joey00072 / microjax
microjax: a JAX-like function transformation engine, but micro
☆33 · Updated 9 months ago
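The tagline above, a JAX-like function transformation engine in miniature, refers to the pattern where `grad` takes a plain Python function and returns its derivative function. The sketch below illustrates that pattern with forward-mode dual numbers in plain Python; it is not microjax's actual implementation or API, and all names here are hypothetical.

```python
# Sketch of a JAX-style `grad` transform via forward-mode dual numbers.
# Illustrative only; microjax's real internals may differ.

class Dual:
    """A number carrying both its value and its derivative (tangent)."""
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.tan + other.tan)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.tan * other.val + self.val * other.tan)

    __rmul__ = __mul__


def grad(f):
    """Transform f: x -> f(x) into df: x -> f'(x), JAX-style."""
    def df(x):
        return f(Dual(x, 1.0)).tan
    return df


f = lambda x: 3.0 * x * x + 2.0 * x   # f'(x) = 6x + 2
print(grad(f)(4.0))                   # → 26.0
```

The point of the pattern is that `grad` composes like any other higher-order function: the user writes ordinary numeric code, and the transform reinterprets it by overloading arithmetic.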
Alternatives and similar repositories for microjax
Users interested in microjax are comparing it to the repositories listed below.
- ☆27 · Updated last year
- ☆81 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated last week
- H-Net Dynamic Hierarchical Architecture ☆65 · Updated last week
- ☆53 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Parallel associative scan for language models ☆18 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated this week
- NanoGPT speedrunning for the poor T4 enjoyers ☆68 · Updated 3 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated 7 months ago
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆93 · Updated this week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 7 months ago
- train with kittens! ☆61 · Updated 9 months ago
- Simple repository for training small reasoning models ☆32 · Updated 5 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated last month
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- ☆38 · Updated last year
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- ☆83 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 10 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆85 · Updated last year
- ☆34 · Updated 10 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last month
- Collection of autoregressive model implementations ☆86 · Updated 3 months ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- ☆21 · Updated last year
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆71 · Updated last month
- Code for ☆27 · Updated 7 months ago