graphcore-research / jax-scalify
JAX Scalify: end-to-end scaled arithmetic
☆16 · Updated 9 months ago
Alternatives and similar repositories for jax-scalify
Users interested in jax-scalify are comparing it to the repositories listed below.
- Code for the paper "Function-Space Learning Rates" · ☆23 · Updated 2 months ago
- Here we will test various linear attention designs. · ☆62 · Updated last year
- Triton Implementation of HyperAttention Algorithm · ☆48 · Updated last year
- GoldFinch and other hybrid transformer components · ☆46 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling · ☆37 · Updated 5 months ago
- DPO, but faster 🚀 · ☆44 · Updated 8 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆28 · Updated 3 months ago
- A scalable implementation of diffusion and flow-matching with XGBoost models, applied to calorimeter data. · ☆18 · Updated 9 months ago
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" · ☆17 · Updated 9 months ago
- A repository for research on medium sized language models. · ☆78 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… · ☆42 · Updated 3 weeks ago
- Implementation of Hyena Hierarchy in JAX · ☆10 · Updated 2 years ago
- ☆53 · Updated 10 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated last month
- HGRN2: Gated Linear RNNs with State Expansion · ☆55 · Updated 11 months ago
- ☆19 · Updated 2 months ago
- ☆34 · Updated 10 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. · ☆32 · Updated 2 months ago
- Official code for the paper "Attention as a Hypernetwork" · ☆40 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. · ☆67 · Updated last week
- Utilities for Training Very Large Models · ☆58 · Updated 10 months ago
- ☆45 · Updated last year
- ☆83 · Updated 11 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto · ☆56 · Updated last year
- Fork of Flame repo for training of some new stuff in development · ☆14 · Updated 3 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆81 · Updated 9 months ago
- This repository contains code for the MicroAdam paper. · ☆19 · Updated 7 months ago
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" · ☆101 · Updated 7 months ago
- ☆81 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated last year