Tele-AI / TeleChat2
TeleChat2 (星辰语义大模型) is a large language model developed and trained by the China Telecom Artificial Intelligence Research Institute; it is the first open-sourced hundred-billion-parameter model trained entirely on domestic (Chinese) compute.
☆266 · Updated 6 months ago
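For orientation, here is a minimal sketch of loading a TeleChat2 checkpoint with Hugging Face transformers. The model ID `Tele-AI/TeleChat2-7B`, the prompt, and the generation settings are assumptions for illustration only; the repository's own README is the authoritative usage reference.

```python
# Hypothetical sketch: loading an assumed TeleChat2 checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tele-AI/TeleChat2-7B"  # assumed model ID, not confirmed from the repo

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom architectures typically need this (assumption)
    torch_dtype="auto",
    device_map="auto",
)

prompt = "请介绍一下星辰语义大模型TeleChat2。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```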
Alternatives and similar repositories for TeleChat2
Users interested in TeleChat2 are comparing it to the libraries listed below.
- ☆242 · Updated 11 months ago
- ☆341 · Updated 3 months ago
- hf-mirror-cli downloads models from Hugging Face quickly via domestic (Chinese) mirrors; no configuration needed, works out of the box. ☆150 · Updated 11 months ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset. ☆217 · Updated last year
- Llama3-Chinese is a large model built on Meta-Llama-3-8B, trained with DoRA + LoRA+ on 500k high-quality Chinese multi-turn SFT samples, 100k English multi-turn SFT samples, and 2,000 single-turn self-cognition samples. ☆295 · Updated last year
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. “面壁小钢炮” focuses on achi… ☆299 · Updated 6 months ago
- 😜 A meme (表情包) vision dataset, annotated using the image-understanding capabilities of glm-4v and step-1v. ☆145 · Updated last year
- Use free LLM APIs together with your own private-domain data to generate SFT training data (entirely free of charge); supports the training-data formats of tools such as llamafactory (synthetic data). ☆194 · Updated last year
- ☆457 · Updated 2 years ago
- ☆113 · Updated 3 months ago
- ☆214 · Updated 11 months ago
- ☆39 · Updated last year
- [Every entry processed] A curated QA dataset of selected Ruozhiba (弱智吧) questions, with each entry manually reviewed and revised. ☆242 · Updated 9 months ago
- ☆175 · Updated last year
- ☆187 · Updated 11 months ago
- A role-playing multi-LLM chat room fine-tuned with InternLM2 on data built from the original text of Journey to the West (《西游记》), its vernacular rendering, and ChatGPT-generated data. The project covers everything about role-playing LLMs, from data acquisition and processing, to fine-tuning with XTuner and deploying to OpenXLab, to deployment with LMDeploy, using op… ☆106 · Updated last year
- Phi3 Chinese post-training model repository ☆324 · Updated last year
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆78 · Updated last year
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs) ☆609 · Updated last year
- A Multi-modal RAG Project with Dataset from Honor of Kings, one of the most popular smartphone games in China ☆72 · Updated last year
- GLM Series Edge Models ☆156 · Updated 7 months ago
- Qwen DianJin: LLMs for the Financial Industry by Alibaba Cloud (通义点金, Alibaba Cloud's financial LLM) ☆418 · Updated last week
- ☆235 · Updated last year
- ☆74 · Updated last year
- ☆348 · Updated last year
- YuLan: An Open-Source Large Language Model ☆633 · Updated last year
- ☆135 · Updated 11 months ago
- Train a 1B LLM with 1T tokens from scratch as a personal project ☆786 · Updated 9 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM2025] ☆199 · Updated 6 months ago