kyegomez / MultiModalMamba
A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.
☆460 · Updated last week
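The core idea behind a ViT + Mamba fusion can be sketched in a few lines: project image patch embeddings into the same token space as text embeddings, concatenate the two sequences, and feed the joint sequence to the sequence backbone (Mamba, in the real repo). The sketch below is purely illustrative and is not the MultiModalMamba API; all names, shapes, and the random "projection" are hypothetical stand-ins.

```python
# Hypothetical sketch (NOT the MultiModalMamba API): fuse ViT-style patch
# embeddings with text token embeddings before a sequence backbone.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real encoder outputs: 16 ViT patch embeddings (dim 64)
# and 8 text token embeddings (dim 32).
patch_emb = rng.normal(size=(16, 64))
text_emb = rng.normal(size=(8, 32))

# A learned projection (here just a fixed random matrix) mapping patch
# embeddings into the text model's embedding dimension.
W_proj = rng.normal(size=(64, 32))
patch_tokens = patch_emb @ W_proj  # shape (16, 32)

# Fusion by concatenation along the sequence axis; in the real model the
# joint sequence would then be consumed by the Mamba backbone.
fused = np.concatenate([patch_tokens, text_emb], axis=0)
print(fused.shape)  # (24, 32)
```

The concatenation-based fusion shown here is only one of several strategies (cross-attention and interleaving are common alternatives); it is used here because it is the simplest to demonstrate.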
Alternatives and similar repositories for MultiModalMamba
Users interested in MultiModalMamba are comparing it to the libraries listed below.
- Code repository for Black Mamba ☆254 · Updated last year
- Build high-performance AI models with modular building blocks ☆545 · Updated last week
- ☆712 · Updated last year
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ☆186 · Updated last year
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ☆630 · Updated last year
- Run PaliGemma in real time ☆131 · Updated last year
- ☆447 · Updated last year
- HPT: Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆167 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆559 · Updated 8 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆904 · Updated 3 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆667 · Updated last year
- Implementation of I-JEPA from "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" ☆272 · Updated 7 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆758 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆246 · Updated 7 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆291 · Updated last year
- Implementation of DoRA ☆301 · Updated last year
- The official repository for the LENS (Large Language Models Enhanced to See) system. ☆352 · Updated last month
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling ☆202 · Updated 2 weeks ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆928 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆328 · Updated last year
- ☆223 · Updated last year
- From-scratch implementation of a vision-language model in pure PyTorch ☆235 · Updated last year
- PyTorch implementation of models from the Zamba2 series ☆184 · Updated 7 months ago
- PyTorch implementation of "Jamba: A Hybrid Transformer-Mamba Language Model" ☆183 · Updated last week
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- ☆416 · Updated last year
- An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting, and much more ☆186 · Updated 2 weeks ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆88 · Updated last year