kklemon / FlashPerceiver
Fast and memory efficient PyTorch implementation of the Perceiver with FlashAttention.
☆31 · Updated last year
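To illustrate the idea the repository's description points at (a minimal sketch, not FlashPerceiver's actual API): a Perceiver compresses an arbitrarily long input into a small, fixed set of learned latent vectors via cross-attention, and a FlashAttention kernel is what keeps that cross-attention fast and memory-efficient. In PyTorch 2.x, `F.scaled_dot_product_attention` can dispatch to a FlashAttention backend; the `LatentCrossAttention` module name and its parameters below are hypothetical, chosen for illustration.

```python
# Sketch of a Perceiver-style latent cross-attention block.
# Queries come from a fixed set of learned latents; keys/values come from
# the (possibly very long) input sequence. PyTorch's built-in
# scaled_dot_product_attention may dispatch to a FlashAttention kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentCrossAttention(nn.Module):  # hypothetical name, for illustration
    def __init__(self, dim: int, num_latents: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, 2 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) -> output: (batch, num_latents, dim)
        b, _, d = x.shape
        h = self.num_heads
        q = self.to_q(self.latents).expand(b, -1, -1)  # queries from latents
        k, v = self.to_kv(x).chunk(2, dim=-1)          # keys/values from input
        # split heads: (batch, heads, tokens, head_dim)
        q, k, v = (t.reshape(b, -1, h, d // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)  # may use FlashAttention
        out = out.transpose(1, 2).reshape(b, -1, d)    # merge heads back
        return self.proj(out)
```

Because the attention output has `num_latents` rows regardless of `seq_len`, downstream compute stays constant as the input grows; only this cross-attention touches the full sequence.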
Alternatives and similar repositories for FlashPerceiver
Users interested in FlashPerceiver are comparing it to the libraries listed below.
- σ-GPT: A New Approach to Autoregressive Models ☆70 · Updated last year
- Flash Attention Triton kernel with support for second-order derivatives ☆121 · Updated this week
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 months ago
- ☆34 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆56 · Updated last year
- ☆35 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- ☆32 · Updated last year
- ☆53 · Updated last year
- ☆34 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆68 · Updated 2 months ago
- ☆79 · Updated last year
- Focused on fast experimentation and simplicity ☆76 · Updated 11 months ago
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆27 · Updated 4 months ago
- ☆122 · Updated 6 months ago
- ☆82 · Updated last year
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆115 · Updated last month
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆42 · Updated last month
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 6 months ago
- ☆19 · Updated 2 weeks ago
- WIP ☆93 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆80 · Updated 6 months ago
- Efficient World Models with Context-Aware Tokenization. ICML 2024 ☆114 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- Normalized Transformer (nGPT) ☆193 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆46 · Updated 2 years ago
- Here we will test various linear attention designs. ☆62 · Updated last year