rachtsy / KPCA_code
Implementation of robust ViT and scaled attention
☆21 · Updated 9 months ago
Alternatives and similar repositories for KPCA_code
Users interested in KPCA_code are comparing it to the libraries listed below.
- Fork of the Flame repo for training new models in development ☆19 · Updated 2 weeks ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Minimum Description Length probing for neural network representations ☆20 · Updated 11 months ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (see the sketch after this list) ☆103 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 7 months ago
- ☆36 · Updated 2 months ago
- ☆34 · Updated last year
- ☆62 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 7 months ago
- Experimental scripts for researching data-adaptive learning rate scheduling ☆22 · Updated 2 years ago
- ☆15 · Updated 9 months ago
- ☆35 · Updated last year
- Using FlexAttention to compute attention with different masking patterns (see the sketch after this list) ☆47 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆62 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- ☆24 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation" ☆14 · Updated 7 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆82 · Updated 7 months ago
- A JAX-like function transformation engine, but micro: microjax ☆34 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the sketch after this list) ☆61 · Updated 3 years ago
- 📄 Small Batch Size Training for Language Models ☆79 · Updated 3 months ago
- ☆56 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆92 · Updated last year
- ☆82 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆132 · Updated 2 months ago
- Collection of autoregressive model implementations ☆85 · Updated last week
- ☆91 · Updated last year
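For reference, a minimal sketch of the Grokfast-EMA idea referenced above: keep an exponential moving average (EMA) of each parameter's gradient and add a scaled copy of it back into the gradient before the optimizer step, amplifying the slow-varying component. The hyperparameter names `alpha` and `lamb` follow the paper's notation; the function name and defaults here are assumptions, not the linked repo's exact code.

```python
import torch

def gradfilter_ema(model: torch.nn.Module, grads: dict | None = None,
                   alpha: float = 0.98, lamb: float = 2.0) -> dict:
    """Amplify the slow (low-frequency) component of each gradient via an EMA.

    Call after loss.backward() and before optimizer.step().
    """
    if grads is None:  # first step: seed the EMA with the current gradients
        grads = {n: p.grad.detach().clone() for n, p in model.named_parameters()
                 if p.grad is not None}
    for n, p in model.named_parameters():
        if p.grad is None:
            continue
        grads[n] = alpha * grads[n] + (1 - alpha) * p.grad.detach()  # update EMA
        p.grad = p.grad + lamb * grads[n]  # boost the slow component
    return grads
```

Typical usage: initialize `grads = None`, then each step run `loss.backward()`, `grads = gradfilter_ema(model, grads)`, `optimizer.step()`.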
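The FlexAttention entry refers to PyTorch's `torch.nn.attention.flex_attention` API (PyTorch ≥ 2.5). A minimal sketch of the general pattern, not the linked repo's code: define a boolean `mask_mod`, build a `BlockMask` from it, and pass it to `flex_attention`. The shapes and the sliding-window width are illustrative.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 256, 64  # illustrative batch, heads, sequence length, head dim
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

WINDOW = 64  # sliding-window width (illustrative)

def sliding_window_causal(b, h, q_idx, kv_idx):
    # True where attention is allowed: causal and within the local window.
    return (q_idx >= kv_idx) & (q_idx - kv_idx < WINDOW)

# B=None / H=None broadcasts the same mask over batch and heads.
block_mask = create_block_mask(sliding_window_causal, B=None, H=None,
                               Q_LEN=S, KV_LEN=S, device="cpu")
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

In practice `flex_attention` is usually wrapped in `torch.compile` so the mask is fused into one efficient kernel; swapping in a different `mask_mod` is all it takes to change the masking pattern.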
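The LayerNorm(SmallInit(Embedding)) entry is a convergence trick: initialize the token embedding near zero and immediately LayerNorm it, so the first layers see well-scaled inputs from step one instead of waiting for the embedding to settle. A minimal PyTorch sketch; the class name and the init scale (`1e-4`) are assumptions.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Token embedding with near-zero init, followed by LayerNorm."""

    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Near-zero init; the exact scale is an assumption, not fixed by the repo.
        nn.init.uniform_(self.embed.weight, -init_scale, init_scale)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.norm(self.embed(token_ids))  # (batch, seq, d_model)
```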