kq-chen / qwen-vl-utils
Helper functions for processing and integrating visual-language information with Qwen-VL series models
☆16Updated last year
Alternatives and similar repositories for qwen-vl-utils
Users interested in qwen-vl-utils are comparing it to the libraries listed below.
- A Framework for Decoupling and Assessing the Capabilities of VLMs☆43Updated last year
- Our 2nd-gen LMM☆34Updated last year
- ☆75Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs☆98Updated last year
- ☆29Updated last year
- ☆17Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆138Updated last year
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08…☆35Updated 6 months ago
- ☆64Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models☆83Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*☆109Updated 7 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler]☆41Updated last year
- Exploration of Adept's multimodal fuyu-8b model. 🤓 🔍☆27Updated 2 years ago
- ☆50Updated 2 years ago
- Code for our paper "All in an Aggregated Image for In-Image Learning"☆29Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models☆28Updated last year
- ☆87Updated 4 months ago
- [ACL2025 Findings] Benchmarking Multihop Multimodal Internet Agents☆47Updated 10 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper☆32Updated last year
- The simplest reproduction of R1 results on small models, illustrating the essential mechanism shared by O1-like models and DeepSeek R1. Think is all you need: experiments support that, for strong reasoning ability, the content of the "think" reasoning process is the core of AGI/ASI.☆45Updated 10 months ago
- [NAACL 2025] Representing Rule-based Chatbots with Transformers☆23Updated 10 months ago
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model☆22Updated 2 years ago
- WanJuan-CC is a high-quality dataset built from CommonCrawl through data extraction, rule-based cleaning, deduplication, safety filtering, and quality filtering.☆14Updated last year
- ☆110Updated last month
- [ICML 2025] |TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation☆118Updated 7 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models☆65Updated last year
- A simple MLLM that surpasses QwenVL-Max using only open-source data and a 14B LLM.☆38Updated last year
- Official Repo for the paper: VCR: Visual Caption Restoration. Check arxiv.org/pdf/2406.06462 for details.☆32Updated 10 months ago
- This repo contains code and data for ICLR 2025 paper MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs☆36Updated 9 months ago
- ☆187Updated 10 months ago