OpenDocCN / python-code-anls
☆44 · Updated 6 months ago
Alternatives and similar repositories for python-code-anls
Users interested in python-code-anls are comparing it to the libraries listed below
- PyTorch model-training code for single precision, half precision, mixed precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed, comparing the training speed and GPU memory usage of each method (a mixed-precision sketch follows this list) ☆114 · Updated last year
- ☆45 · Updated 2 months ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆96 · Updated 9 months ago
- A collection of multimodal (MM) + Chat resources ☆273 · Updated 2 months ago
- A DiT PyTorch codebase, mainly for learning the DiT architecture ☆78 · Updated last year
- DeepSpeed tutorial, annotated examples, and study notes (efficient large-model training) ☆173 · Updated last year
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆244 · Updated 2 months ago
- [ICCV 2025] A Token-level Text Image Foundation Model for Document Understanding ☆111 · Updated last week
- DeepSpeed Tutorial ☆100 · Updated 11 months ago
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆35 · Updated this week
- A new generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆259 · Updated last week
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder, unifying image understanding and generation ☆37 · Updated last year
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆107 · Updated 11 months ago
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers ☆222 · Updated last year
- The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆217 · Updated last year
- ☆78 · Updated 2 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆83 · Updated last month
- Implementation of Denoising Diffusion Probabilistic Model in MindSpore ☆40 · Updated 2 years ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support ☆107 · Updated 6 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆140 · Updated 2 months ago
- Research code for the Multimodal-Cognition team at Ant Group ☆161 · Updated last month
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆386 · Updated last year
- The repo for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ☆45 · Updated 2 weeks ago
- [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆145 · Updated last month
- Collection of image and video datasets for generative AI and multimodal visual AI ☆31 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆73 · Updated last month
- [arXiv'25] Official Implementation of "Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning" ☆29 · Updated last month
- [CVPR'24] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆91 · Updated last month
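
For readers comparing the training setups in the first repository above, here is a minimal sketch of the PyTorch mixed-precision pattern with `torch.amp`; the tiny model, random data, and hyperparameters are placeholders for illustration, not code taken from that repository.

```python
# Minimal mixed-precision training sketch (PyTorch torch.amp, PyTorch >= 2.3).
# The model, data, and hyperparameters below are placeholders for illustration.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# GradScaler rescales the loss so fp16 gradients do not underflow;
# it is disabled on CPU, where autocast uses bf16 and needs no scaling.
scaler = torch.amp.GradScaler(device, enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(32, 512, device=device)          # placeholder batch
    y = torch.randint(0, 10, (32,), device=device)   # placeholder labels

    optimizer.zero_grad(set_to_none=True)
    # autocast runs eligible ops in reduced precision, the rest in fp32
    with torch.amp.autocast(device_type=device):
        loss = criterion(model(x), y)

    scaler.scale(loss).backward()   # backward on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then steps
    scaler.update()                 # adjusts the scale factor for the next step
```

Broadly, the same training loop carries over to the DP/DDP, FSDP, and DeepSpeed variants that the repository benchmarks; what changes is mainly how the model is wrapped and how the job is launched, which is what makes the speed and memory comparison across methods meaningful.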