lucasjinreal / Namo-R1
A real-time CPU VLM with 500M parameters. Surpasses Moondream2 and SmolVLM, and can be trained from scratch with ease.
☆235 · Updated 5 months ago
Alternatives and similar repositories for Namo-R1
Users interested in Namo-R1 are comparing it to the repositories listed below.
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆204 · Updated 8 months ago
- ☆185 · Updated 8 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆257 · Updated 2 weeks ago
- Quick exploration into fine-tuning Florence 2 ☆331 · Updated last year
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆110 · Updated last year
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆272 · Updated last month
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆101 · Updated 11 months ago
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆122 · Updated 3 months ago
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, plus the RL tool Vision-R1. ☆236 · Updated 2 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆387 · Updated 5 months ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation". ☆252 · Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-language tasks. ☆99 · Updated last year
- Grafts the SmolVLM2 vision head onto the Qwen3-0.6B model and fine-tunes the stitched model ☆380 · Updated last month
- ☆371 · Updated 8 months ago
- Research Code for Multimodal-Cognition Team in Ant Group ☆167 · Updated 3 months ago
- [ICCV2025] Refer to any person or object given a natural language description. Codebase for RexSeek and the HumanRef benchmark ☆162 · Updated 2 weeks ago
- Vision Manus: Your versatile Visual AI assistant ☆277 · Updated last week
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆413 · Updated 4 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆129 · Updated 8 months ago
- Codebase for the Recognize Anything Model (RAM) ☆85 · Updated last year
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆225 · Updated 4 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆142 · Updated 4 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 8 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆211 · Updated last year
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding ☆121 · Updated last month
- Official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆155 · Updated last year
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆252 · Updated last month
- Vary-tiny codebase built on LAVIS (for training from scratch) and a PDF image-text pair dataset (about 600k pairs, English/Chinese) ☆86 · Updated last year
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆569 · Updated 11 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even stronger. ☆553 · Updated 3 months ago