bluorion-com / ZClip
Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training".
☆127 · Updated 3 weeks ago
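The paper describes adaptive spike mitigation via anomaly detection on the gradient norm: keep exponential-moving-average statistics of recent norms and scale the gradient down when the current norm's z-score exceeds a threshold. The sketch below is a hedged illustration of that idea only, not the repository's actual API; the class name, default values, and the warm-up logic are assumptions made for this example.

```python
import math


class ZClipSketch:
    """Illustrative z-score-based gradient-norm clipping (not ZClip's real API).

    Tracks an EMA mean and variance of observed gradient norms. Once past a
    warm-up period, a norm whose z-score exceeds ``z_thresh`` is scaled down
    to the threshold boundary; the EMA is then updated with the clipped value
    so a single spike does not contaminate the statistics.
    """

    def __init__(self, alpha=0.97, z_thresh=2.5, warmup=25):
        self.alpha = alpha          # EMA smoothing factor (assumed default)
        self.z_thresh = z_thresh    # z-score above which we clip (assumed default)
        self.warmup = warmup        # steps spent only collecting statistics
        self.mean = None
        self.var = 0.0
        self.count = 0

    def _update(self, x):
        """Fold a norm observation into the EMA mean/variance."""
        if self.mean is None:
            self.mean = x
            return
        delta = x - self.mean
        self.mean += (1 - self.alpha) * delta
        self.var = self.alpha * (self.var + (1 - self.alpha) * delta * delta)

    def clip_factor(self, grad_norm):
        """Return the factor (<= 1.0) by which to scale the gradient."""
        self.count += 1
        if self.count <= self.warmup:
            # Warm-up: never clip, just learn what "normal" norms look like.
            self._update(grad_norm)
            return 1.0
        std = math.sqrt(self.var) + 1e-8
        z = (grad_norm - self.mean) / std
        factor = 1.0
        if z > self.z_thresh:
            # Cap the norm at mean + z_thresh * std.
            factor = (self.mean + self.z_thresh * std) / grad_norm
        self._update(grad_norm * factor)
        return factor
```

In a training loop this factor would multiply the gradients (e.g. after computing the global norm, before the optimizer step); steady norms pass through untouched while a sudden spike is scaled back toward the running statistics.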
Alternatives and similar repositories for ZClip
Users interested in ZClip are comparing it to the repositories listed below.
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆85 · Updated last week
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆52 · Updated 3 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆241 · Updated 2 months ago
- Implementation of the proposed MaskBit from Bytedance AI ☆82 · Updated 7 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆108 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆74 · Updated 6 months ago
- Official PyTorch Implementation for Paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆52 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 10 months ago
- ☆216 · Updated 2 weeks ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆85 · Updated 2 weeks ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 5 months ago
- Implementation of a multimodal diffusion transformer in Pytorch ☆102 · Updated last year
- Implementation of TiTok, proposed by Bytedance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆173 · Updated last year
- Research impl of Native Sparse Attention (2502.11089) ☆54 · Updated 4 months ago
- Just another reasonably minimal repo for class-conditional training of pixel-space diffusion transformers. ☆106 · Updated 3 weeks ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆127 · Updated 10 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆91 · Updated last week
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆57 · Updated last month
- Normalized Transformer (nGPT) ☆184 · Updated 7 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆108 · Updated last month
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch ☆64 · Updated 3 weeks ago
- ☆56 · Updated 3 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 6 months ago
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆122 · Updated 10 months ago
- Supporting pytorch FSDP for optimizers ☆82 · Updated 6 months ago
- Implementation of the proposed DeepCrossAttention by Heddes et al at Google research, in Pytorch ☆88 · Updated 4 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆56 · Updated last year
- RWKV-7: Surpassing GPT ☆91 · Updated 7 months ago
- ☆76 · Updated 4 months ago