apple / ml-fastvlm
This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" (CVPR 2025).
☆7,094 · Updated 7 months ago
Alternatives and similar repositories for ml-fastvlm
Users interested in ml-fastvlm are comparing it to the libraries listed below.
- Everything about the SmolLM and SmolVLM family of models ☆3,516 · Updated last month
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆4,468 · Updated 2 months ago
- Kernels & AI inference engine for mobile devices. ☆3,936 · Updated last week
- Renderer for the harmony response format to be used with gpt-oss ☆4,111 · Updated 2 weeks ago
- OmniGen2: Exploration to Advanced Multimodal Generation. https://arxiv.org/abs/2506.18871 ☆3,979 · Updated last month
- A text-to-speech (TTS), speech-to-text (STT), and speech-to-speech (STS) library built on Apple's MLX framework, providing efficient speec… ☆3,153 · Updated this week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,967 · Updated last week
- Text-audio foundation model from Boson AI ☆7,790 · Updated 3 months ago
- Run LLMs with MLX ☆3,160 · Updated this week
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model ☆3,022 · Updated 5 months ago
- PyTorch code and models for VJEPA2 self-supervised learning from video ☆2,618 · Updated 4 months ago
- Open-source unified multimodal model ☆5,525 · Updated 2 months ago
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents ☆1,884 · Updated 2 months ago
- Sharp Monocular View Synthesis in Less Than a Second ☆5,356 · Updated 2 weeks ago
- Kimi K2 is the large language model series developed by the Moonshot AI team ☆9,784 · Updated last month
- This repository contains the official implementation of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025) ☆1,357 · Updated 2 months ago
- Qwen3-Omni is a natively end-to-end, omni-modal LLM developed by the Qwen team at Alibaba Cloud, capable of understanding text, audio, im… ☆3,185 · Updated 2 months ago
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second ☆5,135 · Updated 8 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,436 · Updated 6 months ago
- Examples using MLX Swift ☆2,360 · Updated 2 weeks ago
- [NeurIPS 2025] SpatialLM: Training Large Language Models for Structured Indoor Modeling ☆4,150 · Updated 3 months ago
- Qwen2.5-Omni is an end-to-end multimodal model by the Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,859 · Updated 6 months ago
- The repository provides code for running inference with the Meta Segment Anything Audio Model (SAM-Audio), links for downloading the trai… ☆2,672 · Updated this week
- Qwen-Image is a powerful image generation foundation model capable of complex text rendering and precise image editing. ☆6,664 · Updated this week
- A course on LLM inference serving on Apple Silicon for systems engineers: build a tiny vLLM + Qwen. ☆3,586 · Updated 2 weeks ago
- Embedding Atlas is a tool that provides interactive visualizations for large embeddings. It allows you to visualize, cross-filter, and se… ☆4,487 · Updated last week
- Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation ☆4,422 · Updated 6 months ago
- RF-DETR is a real-time object detection and segmentation model architecture developed by Roboflow, SOTA on COCO and designed for fine-tun… ☆4,996 · Updated last month
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,724 · Updated 8 months ago
- The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading t… ☆6,659 · Updated last week