SaaRaaS-1300 / InternLM2_horowag
🍏 A repo prepared specifically for the 2024 InternLM (书生·浦语) Large Model Challenge (Spring Round) 🍎 Collects the Holo (赫萝)-related fine-tuning source code
☆11 · Updated last year
Alternatives and similar repositories for InternLM2_horowag
Users interested in InternLM2_horowag are comparing it to the repositories listed below
- Music large model based on InternLM2-chat. ☆22 · Updated 9 months ago
- NVIDIA TensorRT Hackathon 2023 semifinal topic: building and optimizing Tongyi Qianwen Qwen-7B with TensorRT-LLM ☆42 · Updated last year
- ☢️ TensorRT 2023 semifinal: Llama inference acceleration and optimization based on TensorRT-LLM ☆50 · Updated last year
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- Training the LLaMA language model with MMEngine! Supports LoRA fine-tuning! ☆41 · Updated 2 years ago
- ☆102 · Updated 6 months ago
- Training a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆72 · Updated last year
- Deploying LLMs offline on the NVIDIA Jetson platform marks the dawn of a new era in embodied intelligence, where devices can function ind… ☆103 · Updated last year
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- Tianchi NVIDIA TensorRT Hackathon 2023: third-place solution in the preliminary round of the Generative AI Model Optimization Competition ☆50 · Updated 2 years ago
- A role-playing multi-LLM chat room fine-tuned with InternLM2, built from the original text of Journey to the West (《西游记》), its vernacular rendering, and ChatGPT-generated data. The project covers everything about role-playing LLMs, from data acquisition and data processing, to fine-tuning with XTuner and publishing to OpenXLab, to deployment with LMDeploy, and op… ☆103 · Updated last year
- 🔨🔨🔨 Tool for building model training datasets ☆20 · Updated 11 months ago
- ☆57 · Updated last year
- Built on the robust XTuner backend framework, XTuner Chat GUI offers a user-friendly platform for quick and efficient local model inferen… ☆13 · Updated last year
- Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆27 · Updated last week
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆101 · Updated 11 months ago
- ☆39 · Updated 11 months ago
- 💡💡💡 Awesome computer vision apps in Gradio ☆54 · Updated last year
- A from-scratch (0-to-1) VLM finetune implemented without any framework, covering both pre-training and SFT ☆32 · Updated last month
- Deploy RT-DETR with ONNX from the PaddlePaddle framework and graph cut ☆31 · Updated 2 years ago
- ☆67 · Updated last year
- An Android application for GLCC ☆11 · Updated 3 years ago
- An InternLM-based AI assistant for Black Myth: Wukong (《黑神话:悟空》); learn more of the stories behind the game (videos being updated) ☆32 · Updated 8 months ago
- ☆28 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- MLLM @ Game ☆14 · Updated 4 months ago
- First prize in the full-video analysis track of the 9th China Software Cup & second prize in the A2 Baidu PaddlePaddle tracking track of the 10th China Software Cup ☆10 · Updated 2 years ago
- Pretrain, decay, and SFT a CodeLLM from scratch 🧙♂️ ☆39 · Updated last year
- Our 2nd-gen LMM ☆34 · Updated last year
- A unified evaluation library for multiple machine learning libraries ☆266 · Updated last year