Tele-AI / Telechat
☆1,843 · Updated 8 months ago
Alternatives and similar repositories for Telechat
Users interested in Telechat are comparing it to the repositories listed below
- ☆40 · Updated 9 months ago
- Qwen (通义千问) vLLM inference and deployment demo ☆589 · Updated last year
- A 0.2B Chinese conversational model (ChatLM-Chinese-0.2B), with fully open-sourced code for the entire pipeline: dataset sources, data cleaning, tokenizer training, model pre-training, SFT instruction fine-tuning, and RLHF optimization. Supports downstream-task SFT fine-tuning, with a triple information-extraction fine-tuning example. ☆1,566 · Updated last year
- BlueLM (蓝心大模型): Open large language models developed by vivo AI Lab ☆914 · Updated 6 months ago
- Netease Youdao's open-source embedding and reranker models for RAG products. ☆1,800 · Updated 2 weeks ago
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,404 · Updated 4 months ago
- Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more ☆2,751 · Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆772 · Updated 7 months ago
- TeleChat2 (星辰语义大模型): a large language model developed and trained by the China Telecom Artificial Intelligence Research Institute, and the first open-source 100B-parameter model trained entirely on domestic Chinese compute ☆244 · Updated 2 months ago
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,756 · Updated last year
- Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT ☆3,711 · Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning. ☆1,009 · Updated last year
- A series of large language models developed by Baichuan Intelligent Technology ☆4,120 · Updated 8 months ago
- A curated collection of open-source SFT datasets, updated continuously ☆529 · Updated 2 years ago
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA) ☆451 · Updated last year
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆1,359 · Updated this week
- ☆15 · Updated 2 months ago
- Yuan 2.0 Large Language Model ☆688 · Updated last year
- ☆981 · Updated last week
- FinGLM: dedicated to building an open, non-profit, long-term financial LLM project, using open source to advance "AI + finance". ☆2,027 · Updated last year
- Phi2-Chinese-0.2B: train your own small Chinese Phi2 chat model from scratch; supports LangChain integration for loading a local knowledge base for retrieval-augmented generation (RAG). ☆556 · Updated last year
- TigerBot: A multi-language multi-task LLM ☆2,255 · Updated 6 months ago
- Chinese Mixtral mixture-of-experts large models (Chinese Mixtral MoE LLMs) ☆602 · Updated last year
- Train a 1B LLM on 1T tokens from scratch as an individual ☆701 · Updated 2 months ago
- MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, implementing continual pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, ORPO, and GRPO. ☆4,004 · Updated 2 weeks ago
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆645 · Updated last year
- A 13B large language model developed by Baichuan Intelligent Technology ☆2,971 · Updated last year
- ☆347 · Updated last year
- Efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B with the peft library, plus merging the LoRA model into the base model and 4-bit quantization. ☆360 · Updated last year
- Mental Health LLM (LLM x Mental Health): Pre & Post-training & Dataset & Evaluation & Deploy & RAG, with InternLM / Qwen / Baichuan / DeepSeek / M… ☆1,509 · Updated 2 months ago