microsoft / Magma
[CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents
☆1,889 · Updated 3 months ago
Alternatives and similar repositories for Magma
Users who are interested in Magma are comparing it to the libraries listed below.
- [CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use. ☆1,639 · Updated 7 months ago
- PyTorch code and models for VJEPA2 self-supervised learning from video. ☆2,759 · Updated 4 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,518 · Updated 7 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,440 · Updated 6 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,110 · Updated last month
- Qwen3-omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆3,236 · Updated last week
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆1,426 · Updated 3 months ago
- An open-sourced end-to-end VLM-based GUI Agent ☆1,123 · Updated 9 months ago
- ☆990 · Updated 9 months ago
- GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆2,119 · Updated 3 weeks ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,136 · Updated 6 months ago
- Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking multimodal AI agents. ☆808 · Updated 8 months ago
- Codebase for Aria - an Open Multimodal Native MoE ☆1,084 · Updated 11 months ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,878 · Updated 7 months ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆1,090 · Updated 5 months ago
- Code for the Molmo Vision-Language Model ☆856 · Updated last year
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,068 · Updated 3 weeks ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆876 · Updated last week
- ZeroSearch: Incentivize the Search Capability of LLMs without Searching ☆1,223 · Updated 5 months ago
- Code release for "LLMs can see and hear without any training" ☆458 · Updated 8 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,552 · Updated last month
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,718 · Updated last month
- OctoTools: An agentic framework with extensible tools for complex reasoning ☆1,401 · Updated 3 months ago
- ☆1,194 · Updated 2 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,556 · Updated 2 months ago
- Witness the aha moment of VLM with less than $3. ☆4,016 · Updated 7 months ago
- Next-Token Prediction is All You Need ☆2,274 · Updated this week
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments. ☆2,470 · Updated last week
- Official Implementation of "KBLaM: Knowledge Base augmented Language Model" ☆1,434 · Updated 3 months ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ☆3,298 · Updated 6 months ago