OpenDocCN / python-code-anls
☆44 · Updated 7 months ago
Alternatives and similar repositories for python-code-anls
Users interested in python-code-anls are comparing it to the libraries listed below.
- PyTorch model-training code covering single precision, half precision, mixed precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed, with comparisons of training speed and GPU memory usage across methods ☆117 · Updated last year
- A DiT implementation in PyTorch, mainly intended for learning the DiT architecture. ☆80 · Updated last year
- A collection of multimodal (MM) + Chat resources ☆275 · Updated last week
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆99 · Updated 10 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient large-model training) ☆176 · Updated last year
- ☆49 · Updated 3 months ago
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding ☆113 · Updated this week
- DeepSpeed Tutorial ☆101 · Updated last year
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆49 · Updated last month
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆254 · Updated this week
- The official GitHub page for the survey paper "Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey". And this paper is unde… ☆54 · Updated 3 weeks ago
- The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆218 · Updated last year
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆37 · Updated last year
- Notes on multimodal topics for large language model (LLM) algorithm/application engineers ☆230 · Updated last year
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆85 · Updated 3 weeks ago
- Building a VLM starting from basic modules. ☆17 · Updated last year
- Precision Search through Multi-Style Inputs ☆72 · Updated last month
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆222 · Updated last week
- Research code for the Multimodal-Cognition team at Ant Group ☆163 · Updated last month
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆115 · Updated 6 months ago
- 童发发's journey of learning large models ☆118 · Updated 3 weeks ago
- Efficient Multimodal Large Language Models: A Survey ☆367 · Updated 4 months ago
- A collection of notable works on reasoning models such as O1/R1 in the visual domain ☆41 · Updated last month
- [arXiv'25] Official implementation of "Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning" ☆41 · Updated last month
- Training a LLaVA model with better Chinese support, with open-sourced training code and data. ☆68 · Updated 11 months ago
- Build a daily academic subscription pipeline! Get daily arXiv papers and corresponding ChatGPT summaries with pre-defined keywords. It is… ☆42 · Updated 2 years ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 11 months ago
- ☆103 · Updated last year
- ☆79 · Updated 2 weeks ago