KaihuaTang / LLM-TP-Inference-on-910B
This project provides a Tensor Parallel (TP) deployment tutorial for huggingface LLM models on the 910B, and also serves as a minimal codebase for learning TP.
☆30 Updated last year
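For orientation, tensor parallelism shards each large weight matrix across devices and reconciles the partial results with a collective operation. The sketch below is a minimal, generic illustration of that idea and is not code from this repository: the class names `ColumnParallelLinear` and `RowParallelLinear` are hypothetical, and it assumes `torch.distributed` has already been initialized (on a 910B the process-group backend would typically be `hccl` rather than `nccl`).

```python
# Minimal tensor-parallel sketch (illustrative only, not taken from this repo).
# Assumes dist.init_process_group(...) has already been called, e.g. via torchrun;
# on Ascend 910B the backend would be "hccl" instead of "nccl".
import torch
import torch.nn as nn
import torch.distributed as dist


class ColumnParallelLinear(nn.Module):
    """Each rank holds a slice of the output features: Y_local = X @ W_local^T."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world = dist.get_world_size()
        assert out_features % world == 0, "out_features must divide evenly across ranks"
        self.weight = nn.Parameter(torch.empty(out_features // world, in_features))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output stays sharded along the feature dimension; no communication needed.
        return x @ self.weight.t()


class RowParallelLinear(nn.Module):
    """Each rank holds a slice of the input features; partial products are summed."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world = dist.get_world_size()
        assert in_features % world == 0, "in_features must divide evenly across ranks"
        self.weight = nn.Parameter(torch.empty(out_features, in_features // world))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x_shard: torch.Tensor) -> torch.Tensor:
        partial = x_shard @ self.weight.t()             # local partial result
        dist.all_reduce(partial, op=dist.ReduceOp.SUM)  # sum partials across ranks
        return partial                                  # full output on every rank
```

Chaining a column-parallel layer into a row-parallel layer (as in a transformer MLP) keeps the intermediate activation sharded and needs only one all-reduce per block, which is the standard Megatron-style split that most TP tutorials build on.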
Alternatives and similar repositories for LLM-TP-Inference-on-910B
Users that are interested in LLM-TP-Inference-on-910B are comparing it to the libraries listed below
- A multimodal large model implemented from scratch and named Reyes (睿视): R for 睿, eyes for 眼. Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct on the language-model side, and uses a two-layer MLP projection layer to connect… ☆27 Updated 10 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 Updated last year
- ☆74 Updated 7 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM2025] ☆198 Updated 5 months ago
- "what, how, where, and how well? a survey on test-time scaling in large language models" repository ☆82 Updated this week
- ☆176 Updated last week
- Build a daily academic subscription pipeline! Get daily Arxiv papers and corresponding chatGPT summaries with pre-defined keywords. It is… ☆47 Updated 2 years ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆146 Updated 8 months ago
- SFT+RL boosts multimodal reasoning ☆41 Updated 6 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆127 Updated last year
- ☆41 Updated 5 months ago
- DeepSpeed tutorial & annotated examples & study notes (efficient large-model training) ☆183 Updated 2 years ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆151 Updated 2 months ago
- ☆111 Updated 6 months ago
- [SCIS] MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images ☆44 Updated last month
- Adapt an LLM model to a Mixture-of-Experts model using Parameter Efficient finetuning (LoRA), injecting the LoRAs in the FFN. ☆73 Updated 2 months ago
- ☆28 Updated last year
- MLLM @ Game ☆15 Updated 7 months ago
- Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL) ☆18 Updated last year
- ☆90 Updated last year
- ☆107 Updated 11 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆65 Updated 7 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆135 Updated 4 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆72 Updated last month
- The official repository for the Scientific Paper Idea Proposer (SciPIP) ☆66 Updated 10 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆88 Updated 4 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 Updated 10 months ago
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning ☆169 Updated 2 months ago
- ☆37 Updated last year
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆169 Updated last month