SHI-Labs / CompactNet
☆31 · Updated last year
Alternatives and similar repositories for CompactNet
Users interested in CompactNet are comparing it to the repositories listed below.
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- ☆34 · Updated last year
- ☆57 · Updated 11 months ago
- ☆85 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆41 · Updated last year
- ☆36 · Updated last week
- Here we will test various linear attention designs. ☆62 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- ☆34 · Updated 8 months ago
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 3 months ago
- Fork of the Flame repo for training some new work in development ☆17 · Updated last week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆30 · Updated 10 months ago
- ☆34 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆56 · Updated 6 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆67 · Updated 3 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- ☆40 · Updated 5 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆68 · Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆128 · Updated last year
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆90 · Updated 2 weeks ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 3 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆146 · Updated 3 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" ☆109 · Updated last month
- Efficient scaling laws and collaborative pretraining. ☆18 · Updated 7 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago