kyegomez / MultiModalMamba
A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multimodal model. Powered by Zeta, the simplest AI framework ever.
☆453 · Updated last month
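The fusion described above can be sketched generically: image patches are embedded ViT-style, projected into the same space as text tokens, and the combined sequence is processed by a stack of sequence-mixing blocks. This is a minimal pure-PyTorch sketch, not the MultiModalMamba API; the `SeqMixer` block uses a GRU as a stand-in where the real model would use Mamba/SSM layers, and all class and parameter names here are illustrative.

```python
# Generic sketch of a ViT + sequence-model fusion (NOT the MultiModalMamba API).
# The GRU below is a recurrent stand-in for a Mamba/SSM layer.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into patches and linearly embed them (ViT-style)."""
    def __init__(self, patch=8, dim=64):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                      # (B, 3, H, W)
        x = self.proj(x)                       # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)    # (B, num_patches, dim)

class SeqMixer(nn.Module):
    """Placeholder sequence block; a real model would use a Mamba/SSM layer."""
    def __init__(self, dim=64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mix = nn.GRU(dim, dim, batch_first=True)

    def forward(self, x):
        out, _ = self.mix(self.norm(x))
        return x + out                         # residual connection

class TinyMultiModal(nn.Module):
    def __init__(self, vocab=1000, dim=64, depth=2):
        super().__init__()
        self.vision = PatchEmbed(dim=dim)
        self.embed = nn.Embedding(vocab, dim)
        self.blocks = nn.Sequential(*[SeqMixer(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, vocab)

    def forward(self, img, tokens):
        vis = self.vision(img)                 # (B, P, dim) image tokens
        txt = self.embed(tokens)               # (B, T, dim) text tokens
        seq = torch.cat([vis, txt], dim=1)     # prepend image tokens to text
        return self.head(self.blocks(seq))     # (B, P+T, vocab) logits

model = TinyMultiModal()
# 32x32 image with 8x8 patches -> 16 image tokens, plus 5 text tokens = 21
logits = model(torch.randn(2, 3, 32, 32), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 21, 1000])
```

The key design point shared by this sketch and the listed hybrid models is that, once image patches are projected into the token space, the sequence backbone is modality-agnostic, so a Transformer, Mamba, or any other mixer can be swapped in.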
Alternatives and similar repositories for MultiModalMamba
Users interested in MultiModalMamba are comparing it to the libraries listed below.
- Code repository for Black Mamba · ☆249 · Updated last year
- Build high-performance AI models with modular building blocks · ☆533 · Updated this week
- ☆710 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆555 · Updated 6 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ☆888 · Updated 2 months ago
- ☆447 · Updated last year
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] · ☆624 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI · ☆316 · Updated last year
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models · ☆184 · Updated last year
- Run PaliGemma in real time · ☆131 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ☆749 · Updated last year
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ☆644 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ · ☆165 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling · ☆197 · Updated 3 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 · ☆926 · Updated last year
- Official repository for the LENS (Large Language Models Enhanced to See) system · ☆352 · Updated last year
- From-scratch implementation of a vision-language model in pure PyTorch · ☆227 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" · ☆290 · Updated last year
- Code and model checkpoints for the AIMv1 and AIMv2 research projects · ☆1,323 · Updated 2 months ago
- ☆864 · Updated last year
- Implementation of I-JEPA from "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" · ☆273 · Updated 6 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" · ☆189 · Updated last year
- Official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" · ☆244 · Updated 5 months ago
- Reference implementation of the Megalodon 7B model · ☆520 · Updated last month
- Beyond Language Models: Byte Models are Digital World Simulators · ☆324 · Updated last year
- LLaVA-Interactive-Demo · ☆374 · Updated 11 months ago
- Build your own visual reasoning model · ☆395 · Updated this week
- PyTorch implementation of "Jamba: A Hybrid Transformer-Mamba Language Model" · ☆172 · Updated 3 months ago
- ☆415 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization · ☆276 · Updated 11 months ago