OpenDocCN / python-code-anls
☆42 · Updated 11 months ago
Alternatives and similar repositories for python-code-anls
Users that are interested in python-code-anls are comparing it to the libraries listed below
- PyTorch model-training code covering single-precision, half-precision, and mixed-precision training on a single GPU and multiple GPUs (DP / DDP), plus FSDP and DeepSpeed, with comparisons of training speed and GPU memory usage across the methods ☆128 · Updated last year
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆108 · Updated last year
- A collection of multimodal (MM) + Chat resources ☆279 · Updated 4 months ago
- [ICCV 2025] A Token-level Text Image Foundation Model for Document Understanding ☆128 · Updated 4 months ago
- A PyTorch implementation of DiT, mainly for learning the DiT architecture ☆84 · Updated last year
- DeepSpeed tutorial, annotated examples, and study notes (efficient large-model training) ☆184 · Updated 2 years ago
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder, unifying image understanding and generation ☆39 · Updated last year
- DeepSpeed Tutorial ☆104 · Updated last year
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support ☆146 · Updated 10 months ago
- Margin-based Vision Transformer ☆60 · Updated last month
- Research code for the Multimodal-Cognition Team at Ant Group ☆169 · Updated 2 months ago
- Precision Search through Multi-Style Inputs ☆73 · Updated 5 months ago
- ☆83 · Updated 4 months ago
- This is for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆82 · Updated this week
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆120 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆107 · Updated 6 months ago
- [Produced by CVer] Aims to compile the most comprehensive overview of computer vision research directions ☆36 · Updated 2 years ago
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆269 · Updated 2 months ago
- ☆82 · Updated 7 months ago
- Building a VLM model starting from the basic modules ☆18 · Updated last year
- A new generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆518 · Updated 2 months ago
- ☆31 · Updated last year
- A PyTorch distributed training framework ☆84 · Updated 3 weeks ago
- Toward Universal Multimodal Embedding ☆72 · Updated 5 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆297 · Updated 4 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆145 · Updated 2 weeks ago
- Notes on multimodal knowledge for large language model (LLM) algorithm/application engineers ☆255 · Updated last year
- The official implementation of the paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆38 · Updated 7 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆97 · Updated 3 weeks ago
- A collection of awesome works around reasoning models like O1/R1 in the visual domain ☆51 · Updated 5 months ago