SaaRaaS-1300 / InternLM2_horowag
🍏 A repo prepared specifically for the 2024 InternLM (书生·浦语) Large Model Challenge (Spring Season) 🍎 Collects Holo (赫萝)-related fine-tuning source code
☆11 · Updated 11 months ago
Alternatives and similar repositories for InternLM2_horowag
Users that are interested in InternLM2_horowag are comparing it to the libraries listed below
- Music large model based on InternLM2-chat. ☆22 · Updated 8 months ago
- Built on the robust XTuner backend framework, XTuner Chat GUI offers a user-friendly platform for quick and efficient local model inference… ☆13 · Updated last year
- Train a LLaVA model with better Chinese support; the training code and data are open-sourced. ☆70 · Updated last year
- A role-playing multi-LLM chat room fine-tuned from InternLM2, built on the original text of Journey to the West, its vernacular translation, and ChatGPT-generated data. The project covers everything about role-playing LLMs, from data collection and processing, to fine-tuning with XTuner and deploying to OpenXLab, to deployment with LMDeploy, to op… ☆103 · Updated last year
- Xtuner Factory ☆33 · Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- ☆57 · Updated last year
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙♂️ ☆37 · Updated last year
- ☆65 · Updated last year
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☆99 · Updated 6 months ago
- Code for "An Empirical Study of Retrieval Augmented Generation with Chain-of-Thought" ☆16 · Updated last year
- ☆28 · Updated last year
- A multimodal large model implemented from scratch, named Reyes (睿视; R for 睿 "insight", eyes for 眼). Reyes has 8B parameters; the vision encoder is InternViT-300M-448px-V2_5 and the language model is Qwen2.5-7B-Instruct, with Reyes also connected via a two-layer MLP projection… ☆25 · Updated 6 months ago
- 💡💡💡 Awesome computer vision apps in Gradio ☆54 · Updated last year
- MLLM @ Game ☆14 · Updated 4 months ago
- ☢️ TensorRT Hackathon 2023 final round: inference acceleration and optimization of the Llama model based on TensorRT-LLM ☆50 · Updated last year
- GLM Series Edge Models ☆149 · Updated 3 months ago
- Our 2nd-gen LMM ☆34 · Updated last year
- Multimodal chatbot with computer vision capabilities integrated, our 1st-gen LMM ☆100 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆50 · Updated last year
- A simple MLLM that surpassed QwenVL-Max using only open-source data, with a 14B LLM. ☆38 · Updated last year
- LLM tokenizer with the BPE algorithm ☆34 · Updated last year
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆100 · Updated 10 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆181 · Updated last month
- Awesome Colab Projects Collection ☆27 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- ☆39 · Updated 10 months ago
- Calling large models is now routine work in AI projects, but their output is often uncontrollable, yet we still need them for all kinds of downstream tasks with controllable, parseable output. We explore a development approach that integrates closely with Python development. ☆26 · Updated last year