THUDM / GLM
GLM (General Language Model)
☆3,240 · Updated last year
Alternatives and similar repositories for GLM
Users interested in GLM are comparing it to the libraries listed below.
- A large-scale 7B pretrained language model developed by BaiChuan-Inc. ☆5,688 · Updated 11 months ago
- Chinese and English multimodal conversational language model ☆4,155 · Updated 10 months ago
- A finetuning scheme based on ChatGLM-6B + LoRA ☆3,766 · Updated last year
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,680 · Updated last year
- Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model, a low-resource Chinese llama+lora scheme with its structure modeled on alpaca ☆4,153 · Updated 2 months ago
- Luotuo (骆驼): Open-Sourced Chinese Language Models. Developed by 陈启源 @ Central China Normal University & 李鲁鲁 @ SenseTime & 冷子昂 @ SenseTime ☆3,637 · Updated last year
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM) ☆8,170 · Updated 8 months ago
- A series of large language models developed by Baichuan Intelligent Technology ☆4,124 · Updated 7 months ago
- WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023) ☆1,596 · Updated 2 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., lora, p-tuning) ☆2,751 · Updated last year
- A 13B large language model developed by Baichuan Intelligent Technology ☆2,971 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,042 · Updated last year
- TigerBot: A multi-language multi-task LLM ☆2,254 · Updated 5 months ago
- Fine-tuning ChatGLM-6B with PEFT (see the LoRA sketch after this list) ☆3,706 · Updated last year
- Chinese-LLaMA 1&2 and Chinese-Falcon base models; the ChatFlow Chinese dialogue model; a Chinese OpenLLaMA model; NLP pretraining/instruction-tuning datasets ☆3,057 · Updated last year
- ChatYuan: Large Language Model for Dialogue in Chinese and English ☆1,887 · Updated 2 years ago
- Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,741 · Updated last year
- fastllm is a high-performance large-model inference library with no backend dependencies. It supports both tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with 10 GB+ of VRAM can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps single-concurrency, or the INT4-quantized model at 30 tps ☆3,689 · Updated this week
- Downstream task fine-tuning based on the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning ☆2,749 · Updated last year
- ChatGLM-6B finetuning and alpaca finetuning
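
Several of the entries above (the ChatGLM-6B + LoRA finetuning scheme, the PEFT-based ChatGLM fine-tuning, and the multi-method fine-tuning repos) center on the same pattern: wrapping a frozen ChatGLM-6B in LoRA adapters via PEFT. Below is a minimal sketch of that pattern, assuming the Hugging Face `transformers` and `peft` packages; the rank and other hyperparameters are illustrative choices, not settings taken from any of the listed repositories.

```python
# Minimal LoRA/PEFT sketch for ChatGLM-6B -- illustrative only, not any
# listed repo's actual training script. Hyperparameters are assumptions.
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm-6b"  # ChatGLM's custom code needs trust_remote_code
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half()

# LoRA injects small trainable rank-decomposition matrices into the frozen
# base model; ChatGLM's fused attention projection is named "query_key_value".
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # assumed rank; such repos typically use 8-16
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of the 6B params train
```

The other methods these repos mention are exposed through the same `get_peft_model` interface (e.g. peft's `PromptEncoderConfig` for p-tuning), so switching techniques is largely a matter of swapping the config object.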