AIRI-Institute / SAE-Reasoning
☆54 · Updated last month
Alternatives and similar repositories for SAE-Reasoning:
Users interested in SAE-Reasoning are comparing it to the repositories listed below.
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆29 · Updated 3 weeks ago
- ☆93 · Updated last year
- ☆25 · Updated last year
- ☆72 · Updated 6 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆66 · Updated 3 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆54 · Updated 6 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆51 · Updated 2 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆54 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆163 · Updated 3 weeks ago
- ☆51 · Updated 3 weeks ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity…" ☆25 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆50 · Updated last year
- Directional Preference Alignment ☆57 · Updated 7 months ago
- ☆97 · Updated 10 months ago
- A library for efficient patching and automatic circuit discovery. ☆64 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ☆53 · Updated 7 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆58 · Updated last month
- Code for the paper "A Sober Look at Progress in Language Model Reasoning" ☆41 · Updated 3 weeks ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆32 · Updated 6 months ago
- ☆40 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆73 · Updated last year
- ☆111 · Updated 5 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆51 · Updated 3 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆63 · Updated 2 months ago
- ☆84 · Updated last year
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆25 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆54 · Updated last month
- ☆45 · Updated last week
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆60 · Updated 4 months ago