phlippe / liger_kernels
JAX Implementation of Liger Kernels
☆8 · Updated 2 months ago
Alternatives and similar repositories for liger_kernels:
Users interested in liger_kernels are comparing it to the libraries listed below.
- ☆37 · Updated 9 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆82 · Updated 11 months ago
- ☆29 · Updated 3 months ago
- ☆50 · Updated 3 months ago
- ☆24 · Updated last month
- ☆19 · Updated 3 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated this week
- Lightning-like training API for JAX with Flax ☆36 · Updated last month
- ☆31 · Updated 9 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax; supports FSDP on TPU pods ☆30 · Updated last month
- "Why Do We Need Weight Decay in Modern Deep Learning?" [NeurIPS 2024] ☆58 · Updated 3 months ago
- ☆20 · Updated 8 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated 6 months ago
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 3 months ago
- ☆29 · Updated 10 months ago
- Einsum-like high-level array-sharding API for JAX ☆33 · Updated 6 months ago
- Support for PyTorch FSDP in optimizers ☆75 · Updated last month
- A system for automating the selection and optimization of pre-trained models from the TAO Model Zoo ☆24 · Updated 6 months ago
- Code for the NeurIPS 2024 spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆67 · Updated 2 months ago
- A basic, pure-PyTorch implementation of FlashAttention ☆16 · Updated 2 months ago
- Experiment using Tangent to autodiff Triton ☆74 · Updated 11 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- Machine Learning eXperiment Utilities ☆45 · Updated 7 months ago
- ☆30 · Updated 8 months ago
- ☆51 · Updated 7 months ago
- Train a SmolLM-style LLM on FineWeb-Edu in JAX/Flax with an assortment of optimizers ☆14 · Updated this week
- ☆31 · Updated last month
- A port of the Mistral-7B model to JAX ☆30 · Updated 6 months ago
- FlashRNN: fast RNN kernels with I/O awareness ☆70 · Updated last month