Liyulingyue / ChatGLM-6B-Prompt
A prompt set of ChatGLM-6B
☆15 · Updated 2 years ago
Alternatives and similar repositories for ChatGLM-6B-Prompt
Users that are interested in ChatGLM-6B-Prompt are comparing it to the libraries listed below
- Another ChatGLM2 implementation for GPTQ quantization ☆53 · Updated 2 years ago
- ggml implementation of the baichuan13b model (adapted from llama.cpp) ☆55 · Updated 2 years ago
- A more efficient GLM implementation! ☆54 · Updated 2 years ago
- ChatGLM-6B-Slim: ChatGLM-6B with the 20K image tokens pruned away, delivering identical performance with a smaller VRAM footprint. ☆127 · Updated 2 years ago
- A cross-model technique combining multi-LoRA weight ensembling and switching with Zero-Finetune (zero-fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The approach is simple and efficient, aiming to deploy such language models widely at low energy cost, and … ☆116 · Updated 2 years ago
- XVERSE-7B: A multilingual large language model developed by XVERSE Technology Inc. ☆53 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- Frontend for the MOSS chatbot. ☆48 · Updated last year
- Kanchil (鼷鹿, the chevrotain) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- MNN-llm-based deployment of a large language model on Android phones: Qwen1.5-0.5B-Chat ☆86 · Updated last year
- Run ChatGLM2-6B on the BM1684X ☆50 · Updated last year
- A multimodal image-text dialogue model implementing Blip2RWKV + QFormer; using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters exhibits human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, striving with less compute and resour… ☆40 · Updated 2 years ago
- The first Chinese Llama 2 13B model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆91 · Updated 2 years ago
- A demo for a PDF parser (including OCR and object-detection tools) ☆36 · Updated last year
- An open-source LLM based on an MoE structure. ☆58 · Updated last year
- ☆90 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆124 · Updated last year
- Shared data: prompt data and pretraining data ☆36 · Updated last year
- OrionStar-Yi-34B-Chat is an open-source Chinese/English chat model from OrionStar, fine-tuned from the open-source Yi-34B model on 150K+ high-quality corpus samples. ☆261 · Updated last year
- GPT2 ⚡ NCNN ⚡ Chinese dialogue ⚡ x86 ⚡ Android ☆81 · Updated 3 years ago
- The "WuDao" model ☆130 · Updated 4 years ago
- Llama inference for TencentPretrain ☆99 · Updated 2 years ago
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10,000+ tokens/s on a single GPU. ☆45 · Updated 2 years ago
- RWKV fine-tuning ☆37 · Updated last year
- C++ implementations of Qwen2 and Llama 3 ☆48 · Updated last year
- A comprehensive evaluation of the Chinese Llama 2 open-source models on the SuperCLUE OPEN benchmark | Llama2 Chinese evaluation with SuperCLUE ☆127 · Updated 2 years ago
- An open-source multimodal large language model based on baichuan-7b ☆72 · Updated last year
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆19 · Updated 2 years ago
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year