zhqwqwq / Learning-Parity-with-CoT
[ICLR 2025] This repository contains the code to reproduce the results from our paper "From Sparse Dependence to Sparse Attention: Unveiling How Chain-of-Thought Enhances Transformer Sample Efficiency".
☆12 · Updated 10 months ago
Alternatives and similar repositories for Learning-Parity-with-CoT
Users interested in Learning-Parity-with-CoT are comparing it to the repositories listed below.
- ☆24 · Updated last year
- Implementation of PatchSAE as presented in "Sparse autoencoders reveal selective remapping of visual concepts during adaptation" ☆28 · Updated 3 months ago
- ☆27 · Updated 2 months ago
- Official implementation of "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) ☆18 · Updated 6 months ago
- The repository for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Updated last year
- [EMNLP 2025] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un…