Tele-AI / Telechat
☆1,855 · Updated last year
Alternatives and similar repositories for Telechat
Users interested in Telechat are comparing it to the libraries listed below.
- ☆39 · Updated last year
- Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sour… ☆1,473 · Updated 10 months ago
- TeleChat2 (星辰), a large language model developed and trained by the China Telecom Artificial Intelligence Research Institute; the first open-source hundred-billion-parameter model trained entirely on domestic Chinese compute. ☆266 · Updated 6 months ago
- CMMLU: Measuring massive multitask language understanding in Chinese ☆802 · Updated last year
- A 0.2B-parameter Chinese dialogue model (ChatLM-Chinese-0.2B), open-sourcing the full pipeline: dataset sources, data cleaning, tokenizer training, model pre-training, SFT instruction fine-tuning, and RLHF optimization. Supports SFT fine-tuning for downstream tasks, with a worked example of triple-based information extraction. ☆1,667 · Updated last year
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- Netease Youdao's open-source embedding and reranker models for RAG products. ☆1,861 · Updated 4 months ago
- Demo for deploying Tongyi Qianwen (Qwen) inference with vLLM. ☆637 · Updated last year
- XuanYuan (轩辕): Du Xiaoman's Chinese financial dialogue LLM. ☆1,296 · Updated last year
- Backend service for natural-language-to-SQL generation. ☆113 · Updated 5 months ago
- A manually curated Chinese dialogue dataset, plus fine-tuning code for ChatGLM. ☆1,195 · Updated 8 months ago
- Multimodal Chinese LLaMA & Alpaca large language models (VisualCLA). ☆460 · Updated 2 years ago
- Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning. ☆2,777 · Updated 2 years ago
- A curated collection of open-source SFT datasets, continuously updated. ☆569 · Updated 2 years ago
- Chat-甄嬛 (Chat-Zhenhuan), a chat model that imitates the voice of Zhen Huan, built by LoRA fine-tuning ChatGLM2 on all of Zhen Huan's lines from the script of Empresses in the Palace (《甄嬛传》). ☆782 · Updated 8 months ago
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning. ☆1,021 · Updated last year
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project. ☆786 · Updated 9 months ago
- FinGLM: an open, non-profit, long-term financial LLM project that uses open source to advance "AI + finance". ☆2,142 · Updated last year
- Chinese Mixtral mixture-of-experts models (Chinese Mixtral MoE LLMs). ☆609 · Updated last year
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]. ☆1,805 · Updated 6 months ago
- ☆18 · Updated 6 months ago
- ☆362 · Updated last year
- A 13B large language model developed by Baichuan Intelligent Technology. ☆2,950 · Updated 2 years ago
- Unified embedding model. ☆878 · Updated 2 years ago
- A repository for individual experiments reproducing the LLM pre-training process. ☆490 · Updated 8 months ago
- Efficient 4-bit QLoRA fine-tuning of ChatGLM-6B/ChatGLM2-6B using the peft library, including merging the LoRA model into the base model and 4-bit quantization; see the sketch after this list. ☆358 · Updated 2 years ago
- MNBVC (Massive Never-ending BT Vast Chinese corpus), an ultra-large-scale Chinese corpus benchmarked against the 40T of data used to train ChatGPT. It covers not only mainstream culture but also niche subcultures and even "Martian" internet slang, and includes news, essays, novels, books, magazines… ☆4,103 · Updated 3 weeks ago
- A series of large language models developed by Baichuan Intelligent Technology ☆4,118 · Updated last year
- Phase 3 of the Chinese LLaMA & Alpaca project (Chinese Llama-3 LLMs), developed from Meta Llama 3 ☆1,962 · Updated last year
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,091 · Updated last year
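
The peft-based ChatGLM entry above describes a concrete workflow: load the base model in 4-bit precision, fine-tune LoRA adapters (QLoRA), then merge the adapters back into the base model and quantize the result. Below is a minimal sketch of the loading-and-adapter step, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the LoRA hyperparameters are illustrative assumptions, not values taken from that repository.

```python
# Minimal 4-bit QLoRA setup for ChatGLM2-6B (a sketch; hyperparameters are
# illustrative assumptions, not taken from the repository listed above).
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",
    quantization_config=bnb_config,
    trust_remote_code=True,  # ChatGLM ships its own modeling code
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables ckpting

# Attach low-rank adapters to ChatGLM's fused attention projection.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only adapter weights remain trainable
```

After training, peft's `merge_and_unload()` folds the adapter weights back into the base model; in practice the base model is usually reloaded in half precision before merging, and the merged model can then be re-quantized to 4 bit for deployment, matching the merge-and-quantize steps the entry describes.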