kklemon / FlashPerceiver
Fast and memory efficient PyTorch implementation of the Perceiver with FlashAttention.
☆26 · Updated 7 months ago
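The repo's code is not excerpted here, so the following is only a rough sketch of the idea behind the description above, not FlashPerceiver's actual API: a small set of learned latents cross-attends to a long input sequence, with attention routed through `torch.nn.functional.scaled_dot_product_attention`, which dispatches to a FlashAttention kernel when the device, dtype, and shapes permit. The class name `LatentCrossAttention` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentCrossAttention(nn.Module):
    """Hypothetical sketch of a Perceiver-style block (not FlashPerceiver's API):
    a fixed set of learned latents cross-attends to a long input sequence.
    F.scaled_dot_product_attention uses a FlashAttention kernel when eligible."""

    def __init__(self, dim: int = 256, num_latents: int = 64, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim, bias=False)       # queries from the latents
        self.to_kv = nn.Linear(dim, 2 * dim, bias=False)  # keys/values from the input
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); output: (batch, num_latents, dim)
        b, _, d = x.shape
        h = self.num_heads
        q = self.to_q(self.latents).expand(b, -1, -1)  # (b, L, d), shared per batch
        k, v = self.to_kv(x).chunk(2, dim=-1)          # (b, n, d) each
        # Split heads -> (b, h, tokens, d // h).
        q, k, v = (t.reshape(b, -1, h, d // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)  # flash path if available
        out = out.transpose(1, 2).reshape(b, -1, d)    # merge heads -> (b, L, d)
        return self.proj(out)

# Usage: compress a 16k-token sequence into 64 latent vectors.
block = LatentCrossAttention(dim=256)
out = block(torch.randn(2, 16384, 256))  # -> (2, 64, 256)
```

Because the number of latents stays fixed, compute scales linearly with input length, and the fused attention kernel avoids materializing the full num_latents × seq_len attention matrix, which is where the speed and memory savings in the description come from.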
Alternatives and similar repositories for FlashPerceiver
Users interested in FlashPerceiver are comparing it to the libraries listed below.
- ☆65 · Updated 11 months ago
- ☆53 · Updated last year
- Exploration into the Firefly algorithm in Pytorch ☆39 · Updated 3 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆56 · Updated last year
- ☆33 · Updated 4 months ago
- ☆27 · Updated last year
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆80 · Updated last week
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆50 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆73 · Updated 5 months ago
- ☆32 · Updated 11 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 9 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆64 · Updated 9 months ago
- ☆29 · Updated 6 months ago
- Unofficial implementation of GotenNet, new SOTA 3d equivariant transformer, in Pytorch ☆62 · Updated last month
- ☆22 · Updated last week
- ☆32 · Updated last year
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated last week
- Efficient World Models with Context-Aware Tokenization. ICML 2024 ☆100 · Updated 8 months ago
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆125 · Updated this week
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from Deepmind ☆50 · Updated this week
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆92 · Updated 6 months ago
- ☆78 · Updated 11 months ago
- Implementation and explorations into Blackbox Gradient Sensing (BGS), an evolutionary strategies approach proposed in a Google Deepmind p… ☆13 · Updated this week
- Normalized Transformer (nGPT) ☆181 · Updated 6 months ago
- Official Code for Paper "Think While You Generate: Discrete Diffusion with Planned Denoising" [ICLR 2025] ☆61 · Updated last month
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆127 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Implementation of a framework for Genie2 in Pytorch ☆148 · Updated 4 months ago
- A basic pure pytorch implementation of flash attention ☆16 · Updated 7 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆104 · Updated 3 weeks ago