rimads / avey-dpa
Code for the paper "Don't Pay Attention" ☆45 · Updated last week

Alternatives and similar repositories for avey-dpa
Users interested in avey-dpa are comparing it to the libraries listed below.
- ☆34 · Updated 9 months ago
- Train a SmolLM-style LLM on FineWeb-Edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 3 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆19 · Updated last month
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- ☆19 · Updated last month
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆127 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆52 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- ☆61 · Updated 7 months ago
- Code for the paper "Function-Space Learning Rates" ☆20 · Updated 3 weeks ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- ☆52 · Updated last year
- ☆79 · Updated 10 months ago
- ☆81 · Updated last year
- Focused on fast experimentation and simplicity ☆74 · Updated 6 months ago
- Utilities for PyTorch distributed ☆24 · Updated 3 months ago
- Explorations into adversarial losses on top of the autoregressive loss for language modeling ☆37 · Updated 4 months ago
- ☆78 · Updated 11 months ago
- Latent Diffusion Language Models ☆68 · Updated last year
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆103 · Updated 2 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆65 · Updated 10 months ago
- ☆21 · Updated 7 months ago
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. SGD only; do not use it with Adam. ☆80 · Updated 10 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- ☆31 · Updated last year
- Implementation of Alternators, a novel family of generative models for time-dependent data ☆35 · Updated 2 weeks ago
- ☆53 · Updated 8 months ago
- Transformer with Mu-Parameterization (muP), implemented in JAX/Flax; supports FSDP on TPU pods ☆30 · Updated 2 weeks ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 weeks ago
- A basic pure PyTorch implementation of FlashAttention ☆16 · Updated 7 months ago
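One of the entries above reimplements FlashAttention in pure PyTorch. The core trick behind that algorithm is tiled attention with an online softmax: keys and values are processed in blocks while a running max, normalizer, and output accumulator are maintained, so the full score matrix is never materialized. The sketch below illustrates that idea in NumPy; it is an illustrative reconstruction of the technique, not code from any repository listed here.

```python
import numpy as np

def naive_attention(q, k, v):
    # Standard softmax attention: materializes the full n x n score matrix.
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def tiled_attention(q, k, v, block=4):
    # FlashAttention-style online softmax: process K/V in blocks, keeping a
    # running max (m), running normalizer (l), and output accumulator (o).
    n, d = q.shape
    m = np.full((n, 1), -np.inf)  # running row-wise max of scores
    l = np.zeros((n, 1))          # running softmax normalizer
    o = np.zeros((n, d))          # unnormalized output accumulator
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = q @ kb.T / np.sqrt(d)                         # scores for this block
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        p = np.exp(s - m_new)                             # block numerator
        scale = np.exp(m - m_new)                         # rescale old statistics
        l = l * scale + p.sum(axis=-1, keepdims=True)
        o = o * scale + p @ vb
        m = m_new
    return o / l

rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, 8, 16))
assert np.allclose(naive_attention(q, k, v), tiled_attention(q, k, v))
```

Because the old accumulators are rescaled by `exp(m - m_new)` whenever a new block raises the running max, the tiled result is mathematically identical to the naive softmax, not an approximation; the real kernels add the same trick per tile on the GPU to avoid writing the score matrix to memory.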