hyperevolnet / Terminator
The official repository for "HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction".
☆40 · updated 6 months ago
Alternatives and similar repositories for Terminator
Users interested in Terminator are comparing it to the repositories listed below.
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind (☆129 · updated last year)
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… (☆116 · updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention (☆99 · updated last year)
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" (☆139 · updated this week)
- Official PyTorch implementation of "Vision-Language Models Create Cross-Modal Task Representations", ICML 2025 (☆31 · updated 5 months ago)
- Mixture of A Million Experts (☆48 · updated last year)
- ☆81 · updated last year
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (☆161 · updated 9 months ago)
- Autoregressive Image Generation (☆32 · updated 4 months ago)
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning (☆131 · updated last month)
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" (☆85 · updated last month)
- HGRN2: Gated Linear RNNs with State Expansion (☆55 · updated last year)
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" (☆228 · updated 2 weeks ago)
- ☆86 · updated last year
- NeuMeta transforms neural networks by allowing a single model to adapt on the fly to different sizes, generating the right weights when n… (☆43 · updated 11 months ago)
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM (☆59 · updated last year)
- Implementation of Infini-Transformer in PyTorch (☆113 · updated 9 months ago)
- WIP (☆93 · updated last year)
- Some preliminary explorations of Mamba's context scaling (☆216 · updated last year)
- ☆58 · updated last year
- Inference speed benchmark for "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" (☆74 · updated last year)
- "$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources" (☆147 · updated 3 weeks ago)
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in PyTorch and Zeta (☆123 · updated this week)
- Official code for the paper "Attention as a Hypernetwork" (☆45 · updated last year)
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" (☆61 · updated 11 months ago)
- Unofficial implementation of Selective Attention Transformer (☆17 · updated last year)
- Implementation of Agent Attention in PyTorch (☆91 · updated last year)
- Griffin MQA + Hawk Linear RNN hybrid (☆89 · updated last year)
- ☆302 · updated 6 months ago
- Implementation of a multimodal diffusion transformer in PyTorch (☆106 · updated last year)