rimads / avey-dpa
Code for the paper "Don't Pay Attention"
☆50 · Updated last month
Alternatives and similar repositories for avey-dpa
Users interested in avey-dpa are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆72 · Updated last week
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated 3 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated last month
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning. ☆116 · Updated last week
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆56 · Updated 7 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (a minimal sketch of the gradient filter appears after this list) ☆103 · Updated 10 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- A State-Space Model with Rational Transfer Function Representation. ☆82 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆103 · Updated last week
- Lightweight package that tracks and summarizes code changes using LLMs (large language models) ☆34 · Updated 8 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated 2 weeks ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆90 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 4 months ago
- Fork of the Flame repo for training some new work in development ☆18 · Updated 3 weeks ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 4 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere); see the normalized-residual sketch after this list ☆107 · Updated 7 months ago
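
The Grokfast entry above names its mechanism outright: amplify the slow-moving component of the gradients. A minimal sketch of that idea, assuming a PyTorch training loop; the function name and the `alpha`/`lamb` values here are illustrative choices, not the repository's API or defaults:

```python
from typing import Optional

import torch


def grokfast_ema_filter(
    model: torch.nn.Module,
    ema_grads: Optional[dict],
    alpha: float = 0.98,  # EMA decay for the slow gradient component (illustrative)
    lamb: float = 2.0,    # amplification factor for that component (illustrative)
) -> dict:
    """Track an EMA of each parameter's gradient and add the amplified
    EMA back into the current gradient, in place."""
    if ema_grads is None:  # first step: seed the EMA with the raw gradients
        ema_grads = {
            n: p.grad.detach().clone()
            for n, p in model.named_parameters()
            if p.grad is not None
        }
    for n, p in model.named_parameters():
        if p.grad is not None:
            ema_grads[n].mul_(alpha).add_(p.grad.detach(), alpha=1.0 - alpha)
            p.grad.add_(ema_grads[n], alpha=lamb)  # boost the slow component
    return ema_grads
```

This would be called between `loss.backward()` and `optimizer.step()`, threading the returned EMA dict through the training loop.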
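
Likewise, the nGPT entry states its core idea in the title: keep representations on the unit hypersphere. A toy sketch of that normalized-residual update, where the class and parameter names are my own rather than the reproduction's API:

```python
import torch
import torch.nn.functional as F


class NormalizedResidual(torch.nn.Module):
    """Wrap a block so hidden states stay unit-norm: interpolate from the
    current state toward the block's (normalized) output, then renormalize."""

    def __init__(self, block: torch.nn.Module, dim: int, alpha_init: float = 0.05):
        super().__init__()
        self.block = block
        # learnable per-channel step size toward the block's suggestion
        self.alpha = torch.nn.Parameter(torch.full((dim,), alpha_init))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h = F.normalize(h, dim=-1)                     # stay on the hypersphere
        h_suggest = F.normalize(self.block(h), dim=-1) # block's suggested state
        h = h + self.alpha * (h_suggest - h)           # step toward it
        return F.normalize(h, dim=-1)                  # project back to unit norm
```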