KindXiaoming / physics_of_skill_learning
We study toy models of skill learning.
☆31 · Updated last week
Alternatives and similar repositories for physics_of_skill_learning
Users interested in physics_of_skill_learning are comparing it to the repositories listed below.
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆89 · Updated last year
- ☆91 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- ☆56 · Updated last year
- ☆48 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆75 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- A repository for research on medium-sized language models ☆77 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- ☆82 · Updated last year
- Replicating O1 inference-time scaling laws ☆93 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations" (ICML 2025) ☆32 · Updated 9 months ago
- ☆39 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆123 · Updated last year
- ☆33 · Updated last year
- Unofficial Implementation of Selective Attention Transformer ☆20 · Updated last year
- Lottery Ticket Adaptation ☆40 · Updated last year
- Simple repository for training small reasoning models ☆49 · Updated last year
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 9 months ago
- A simple PyTorch implementation of high-performance Multi-Query Attention ☆16 · Updated 2 years ago
- MatFormer repo ☆70 · Updated last year
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆31 · Updated 8 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆135 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated last year