AIRI-Institute / SAE-Reasoning
☆21 · Updated this week
Alternatives and similar repositories for SAE-Reasoning:
Users interested in SAE-Reasoning are comparing it to the repositories listed below.
- ☆24 · Updated last year
- ☆71 · Updated 7 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 5 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆44 · Updated 3 weeks ago
- ☆74 · Updated 7 months ago
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆44 · Updated last month
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆60 · Updated 2 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆43 · Updated 8 months ago
- Code for the blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆16 · Updated 3 weeks ago
- ☆96 · Updated 9 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 4 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆26 · Updated 3 weeks ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆26 · Updated 11 months ago
- Prune transformer layers ☆68 · Updated 10 months ago
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Exploration of automated dataset selection approaches at large scales ☆34 · Updated 3 weeks ago
- Official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆28 · Updated last week
- ☆41 · Updated 3 weeks ago
- Official implementation of Self-Exploring Language Models (SELM) ☆62 · Updated 9 months ago
- ☆32 · Updated 3 weeks ago
- Code for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity…" ☆25 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆24 · Updated 4 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆77 · Updated this week
- ☆47 · Updated 7 months ago
- ☆49 · Updated 7 months ago
- ☆30 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated this week
- ☆16 · Updated 2 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆23 · Updated 2 weeks ago