Liyulingyue / ChatGLM-6B-Prompt
A prompt set for ChatGLM-6B
☆14 Updated last year
Related projects
Alternatives and complementary repositories for ChatGLM-6B-Prompt
- Large-scale exact string matching tool ☆15 Updated last year
- zero: LLM tuning without training ☆30 Updated last year
- A more efficient GLM implementation! ☆55 Updated last year
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory ☆25 Updated 5 months ago
- qwen2 and llama3 cpp implementation ☆34 Updated 5 months ago
- A multimodal image-text dialogue LLM built on Blip2RWKV + QFormer, using a Two-Step Cognitive Psychology Prompt method; with only 3B parameters the model can exhibit human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, striving with less compute and fewer resources to … ☆36 Updated last year
- run ChatGLM2-6B in BM1684X ☆48 Updated 8 months ago
- Another ChatGLM2 implementation for GPTQ quantization ☆54 Updated last year
- GPT+ toolkit: a simple, practical one-stop AGI architecture with built-in localization, LLM models, agents, a vector database, and intelligent chains ☆48 Updated last year
- A Baidu QA dataset of 1 million entries ☆49 Updated 11 months ago
- Converts the 01.AI (零一万物) YI-34B model API into formats supported by various open-source tools that use the OpenAI API, without modifying those tools' configuration or code. ☆11 Updated 9 months ago
- Research on accelerating production deployment of the GOT-OCR project, not limited to any language ☆48 Updated 2 weeks ago
- It's an open-source LLM based on an MoE structure. ☆57 Updated 4 months ago
- Trains a Chinese mini LLM from scratch that can hold basic conversations; model size depends on the hardware at hand ☆50 Updated 2 months ago
- Triton Documentation in Chinese Simplified / Triton 中文文档 ☆16 Updated 3 weeks ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆36 Updated 6 months ago
- run chatglm3-6b in BM1684X ☆39 Updated 8 months ago
- aigc evals ☆10 Updated 11 months ago
- Modifies the openai_api.py bundled with ChatGLM2 to support ChatGLM3. ☆20 Updated last year
- PaddleClas ShiTu Image Manager: a PP-ShiTu gallery management tool ☆14 Updated last year
- The llava-Qwen2-7B-Instruct-Chinese-CLIP model improves Chinese text recognition and meme-connotation recognition, approaching the recognition level of gpt4o and claude-3.5-sonnet! ☆13 Updated 3 months ago
- Video understanding: the Qwen video multimodal model & Dify ☆24 Updated 2 months ago
- rwkv finetuning ☆35 Updated 6 months ago
- AGI module library architecture diagram ☆75 Updated last year
- Shared data: prompt data and pretraining data ☆35 Updated 11 months ago
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆17 Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU ☆45 Updated last year
- The Paddle implementation of Meta's LLaMA. ☆44 Updated last year