SHI-Labs / CompactNet
☆31 · updated 11 months ago
Alternatives and similar repositories for CompactNet:
Users interested in CompactNet are comparing it to the libraries listed below.
- Minimal Implementation of Visual Autoregressive Modelling (VAR) · ☆29 · updated 3 weeks ago
- ☆31 · updated last year
- Triton Implementation of HyperAttention Algorithm · ☆47 · updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch · ☆23 · updated 2 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆26 · updated 11 months ago
- Implementation of Spectral State Space Models · ☆16 · updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆36 · updated last year
- ☆52 · updated 6 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] · ☆18 · updated last month
- ☆77 · updated 7 months ago
- ☆53 · updated last year
- Unofficial Implementation of Selective Attention Transformer · ☆16 · updated 5 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore · ☆26 · updated 6 months ago
- ☆37 · updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun · ☆46 · updated last month
- A JAX-like function transformation engine, but micro: microjax · ☆30 · updated 5 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆65 · updated 6 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. · ☆17 · updated 3 weeks ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆54 · updated 7 months ago
- ☆31 · updated 3 months ago
- ☆32 · updated last year
- ☆59 · updated 5 months ago
- A State-Space Model with Rational Transfer Function Representation. · ☆78 · updated 10 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" · ☆24 · updated 5 months ago
- Train, tune, and run inference with the Bamba model · ☆88 · updated 2 months ago
- Mixture of A Million Experts · ☆43 · updated 8 months ago
- ☆11 · updated 10 months ago
- Here we will test various linear attention designs. · ☆60 · updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆27 · updated 2 months ago
- Code for · ☆27 · updated 3 months ago