apple / ml-fastvlm
This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" (CVPR 2025).
☆7,029 · Updated 7 months ago
Alternatives and similar repositories for ml-fastvlm
Users interested in ml-fastvlm are comparing it to the libraries listed below.
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆4,380 · Updated last month
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,910 · Updated last week
- Run LLMs with MLX ☆3,003 · Updated this week
- This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025) ☆1,332 · Updated 2 months ago
- Open-source unified multimodal model ☆5,444 · Updated last month
- A text-to-speech (TTS), speech-to-text (STT), and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆3,010 · Updated this week
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents ☆1,872 · Updated 2 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,445 · Updated 3 weeks ago
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆3,062 · Updated 2 months ago
- Contexts Optical Compression ☆21,287 · Updated last month
- Real-time webcam demo with SmolVLM and llama.cpp server ☆4,838 · Updated 7 months ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,846 · Updated 6 months ago
- PyTorch code and models for VJEPA2 self-supervised learning from video. ☆2,529 · Updated 3 months ago
- Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud. ☆16,985 · Updated 2 weeks ago
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆6,359 · Updated last month
- Embedding Atlas is a tool that provides interactive visualizations for large embeddings. It allows you to visualize, cross-filter, and se… ☆4,445 · Updated last week
- Kimi K2 is the large language model series developed by the Moonshot AI team ☆9,690 · Updated last month
- [NeurIPS 2025] SpatialLM: Training Large Language Models for Structured Indoor Modeling ☆4,112 · Updated 2 months ago
- RF-DETR is a real-time object detection and segmentation model architecture developed by Roboflow, SOTA on COCO and designed for fine-tun… ☆4,624 · Updated last month
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. ☆5,053 · Updated 7 months ago
- The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading t… ☆5,698 · Updated this week
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. ☆3,001 · Updated 5 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,423 · Updated 5 months ago
- Reference PyTorch implementation and models for DINOv3 ☆8,731 · Updated 3 weeks ago
- GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆1,761 · Updated last month
- Examples using MLX Swift ☆2,333 · Updated last week
- StarVector is a foundation model for SVG generation that transforms vectorization into a code generation task. Using a vision-language mo… ☆4,136 · Updated last month
- SAM 3D Objects ☆4,764 · Updated this week
- Renderer for the harmony response format to be used with gpt-oss ☆4,077 · Updated last month
- Solve Visual Understanding with Reinforced VLMs ☆5,742 · Updated last month