lucidrains / mirasol-pytorch
Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch
★ 88 · Updated 11 months ago
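For orientation, below is a minimal sketch of what a single training step with the model might look like. The import path, constructor keywords, and tensor shapes are assumptions modeled on the conventions of other lucidrains repositories, not taken from this README; consult the package itself for the real interface.

```python
import torch
from mirasol_pytorch import Mirasol  # assumed import path for the `mirasol-pytorch` package

# all constructor keywords below are hypothetical, chosen to illustrate the
# audio + video + text autoregressive setup described in the paper
model = Mirasol(
    dim = 512,                # assumed transformer dimension
    num_text_tokens = 256,    # assumed text vocabulary size
    video_image_size = 128,   # assumed square frame resolution
    audio_freq_dim = 64       # assumed frequency bins of the audio spectrogram
)

# dummy inputs; shapes are illustrative only
audio = torch.randn(1, 64, 1024)         # (batch, freq, time)
video = torch.randn(1, 3, 12, 128, 128)  # (batch, channels, frames, height, width)
text  = torch.randint(0, 256, (1, 1024)) # (batch, sequence)

# assumed to return the autoregressive cross-entropy loss over text tokens
loss = model(audio = audio, video = video, text = text)
loss.backward()
```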
Related projects
Alternatives and complementary repositories for mirasol-pytorch
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts (★ 109, updated last month)
- Implementation of Infini-Transformer in Pytorch (★ 104, updated last month)
- Implementation of a multimodal diffusion transformer in Pytorch (★ 97, updated 4 months ago)
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch (★ 95, updated last year)
- Language Quantized AutoEncoders (★ 94, updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention (★ 90, updated 3 months ago)
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto (★ 53, updated 6 months ago)
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch (★ 97, updated last year)
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… (★ 119, updated 3 months ago)
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (★ 103, updated 3 months ago)
- Implementation of GateLoop Transformer in Pytorch and Jax (★ 86, updated 5 months ago)
- M4 experiment logbook (★ 56, updated last year)
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch (★ 248, updated 6 months ago)
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google Deepmind (★ 53, updated 2 months ago)
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT (★ 205, updated 3 months ago)
- Randomized Positional Encodings Boost Length Generalization of Transformers (★ 78, updated 8 months ago)
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch (★ 95, updated last year)
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) (★ 43, updated last month)
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) (★ 66, updated last year)
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts", by Xu Owen He at Deepmind (★ 112, updated 3 months ago)
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… (★ 79, updated 2 months ago)
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… (★ 49, updated last year)
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 (★ 88, updated last year)
- Implementation of Agent Attention in Pytorch (★ 86, updated 4 months ago)
- Implementation of the Llama architecture with RLHF + Q-learning (★ 157, updated 11 months ago)
- Official code for "TOAST: Transfer Learning via Attention Steering" (★ 186, updated last year)
- Mixture of A Million Experts (★ 32, updated 3 months ago)
- LL3M: Large Language and Multi-Modal Model in Jax (★ 65, updated 7 months ago)