mcleish7 / gemstone-scaling-laws
☆24 · Updated 3 months ago
Alternatives and similar repositories for gemstone-scaling-laws
Users interested in gemstone-scaling-laws are comparing it to the repositories listed below.
- ☆19 · Updated 10 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 3 months ago
- Code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- ☆31 · Updated 4 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 5 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆79 · Updated last month
- Official repository answering "How to do patching on all available SAEs on GPT-2?"; implementation of the p… ☆11 · Updated 3 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 6 months ago
- Exploration of automated dataset selection approaches at large scales ☆40 · Updated 2 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆29 · Updated last month
- ☆28 · Updated 10 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆26 · Updated 6 months ago
- Simple and scalable tools for data-driven pretraining data selection ☆23 · Updated 3 months ago
- Efficient scaling laws and collaborative pretraining ☆16 · Updated 3 months ago
- ☆18 · Updated 3 months ago
- Official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆27 · Updated 2 months ago
- Code for reproducing the paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆10 · Updated last month
- Sparse Autoencoder Training Library ☆49 · Updated 2 weeks ago
- ☆14 · Updated last year
- ☆33 · Updated 4 months ago
- ☆54 · Updated 2 years ago
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆14 · Updated 8 months ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- https://footprints.baulab.info ☆17 · Updated 7 months ago
- Universal Neurons in GPT2 Language Models ☆29 · Updated 11 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 11 months ago
- ☆49 · Updated last year
- ☆42 · Updated last year