SamsungSAILMontreal / nino
Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [ICLR 2025]
☆27 · Updated 3 months ago
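For context, the paper's idea is to occasionally replace a model's parameters with values extrapolated ("nowcast") from its recent training trajectory, so fewer base-optimizer steps are needed. The sketch below illustrates that training pattern only, under assumptions: the `nowcaster` object and its `predict_future` method are hypothetical placeholders, not this repository's actual API.

```python
# Illustrative sketch of periodic parameter nowcasting during training.
# `nowcaster` and its `predict_future` method are hypothetical placeholders,
# not the API of this repository.
import torch
import torch.nn as nn


def train_with_nowcasting(model, nowcaster, loader,
                          steps=10_000, period=1_000, lr=1e-3):
    """Run a standard optimizer, and every `period` steps jump the parameters
    forward using a learned nowcasting model (hypothetical interface)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    history = []  # recent parameter snapshots fed to the nowcaster
    data = iter(loader)
    for step in range(1, steps + 1):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)

        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

        if step % period == 0:
            # Snapshot the current parameters, extrapolate ("nowcast")
            # their future values, and load the prediction back in.
            history.append([p.detach().clone() for p in model.parameters()])
            future = nowcaster.predict_future(history)  # hypothetical call
            with torch.no_grad():
                for p, p_pred in zip(model.parameters(), future):
                    p.copy_(p_pred)
    return model
```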
Alternatives and similar repositories for nino
Users interested in nino are comparing it to the libraries listed below.
- [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508) ☆64 · Updated this week
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆133 · Updated 2 months ago
- Implementation of Mind Evolution, from the DeepMind paper "Evolving Deeper LLM Thinking" ☆59 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Universal Reasoning Model ☆121 · Updated 2 weeks ago
- A repository for research on medium-sized language models ☆77 · Updated last year
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆125 · Updated 2 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆38Updated 7 months ago
- Official repo of the paper LM2 ☆46 · Updated 11 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Updated 3 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆62 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Code for the paper "Don't Pay Attention" ☆51 · Updated 4 months ago
- KV Cache Steering for Inducing Reasoning in Small Language Models ☆44 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"☆102Updated last year
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 3 months ago
- Fork of the Flame repo for training some new work in development ☆19 · Updated 3 weeks ago
- Unofficial implementation of the Selective Attention Transformer ☆20 · Updated last year