JingYiJun / MOSS_backend
backend for fastnlp MOSS project
☆60 · Updated 8 months ago
Alternatives and similar repositories for MOSS_backend:
Users interested in MOSS_backend are comparing it to the libraries listed below.
- Frontend for the MOSS chatbot. ☆48 · Updated 9 months ago
- MOSS 003 WebSearchTool: a simple but reliable implementation ☆45 · Updated last year
- Moss Vortex is a lightweight, high-performance deployment and inference backend engineered specifically for MOSS 003, providing a weal… ☆37 · Updated last year
- GAOKAO-Bench-Updates is a supplement to GAOKAO-Bench, a dataset for evaluating large language models. ☆26 · Updated 2 months ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B parameters) can also be aligned with human preferences. ☆113 · Updated last year
- Implements a cross-model technique combining multi-LoRA weight ensemble switching with Zero-Finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The approach is simple and efficient, aiming for low-power, wide deployment of such language models, and… ☆116 · Updated last year
- ✅ Runs on a 4 GB GPU | A simple implementation of single-machine ChatGLM inference across multiple compute devices (GPU, CPU). ☆34 · Updated last year
- A multimodal image-text dialogue LLM implementing Blip2RWKV + QFormer; using the Two-Step Cognitive Psychology Prompt method, a model with only 3B parameters can exhibit human-like causal chains of thought. Benchmarked against MiniGPT-4, ImageBind, and other image-text dialogue LLMs, it strives to use less compute and resour… ☆38 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 · Updated last year
- ChatGLM-6B-Slim: ChatGLM-6B with 20K image tokens pruned away; identical performance with a smaller GPU memory footprint. ☆126 · Updated last year
- This project performs fast encoding detection and conversion on large volumes of text files to assist data cleaning for the MNBVC corpus project. ☆59 · Updated 5 months ago
- RWKV fine-tuning ☆36 · Updated 11 months ago
- A Chinese instruction dataset for fine-tuning LLMs ☆27 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- ☆83 · Updated last year
- ☆103 · Updated 3 months ago
- Self-hosted ChatGLM-6B API made with FastAPI ☆78 · Updated 2 years ago
- 【丫丫】(YaYa) is an attempt at instruction fine-tuning with LoRA on top of the MOSS base model, completed mainly by Huang Hongsen and Chen Qiyuan @ Central China Normal University. It is also a sub-project of the 【骆驼】(Luotuo) open-source Chinese large language model. ☆30 · Updated last year
- MOSS chat fine-tuning ☆50 · Updated 11 months ago
- Light local website for displaying the performance of different chat models. ☆85 · Updated last year
- The complete training code for the open-source high-performance Llama model, covering the full process from pre-training to RLHF. ☆65 · Updated last year
- The first Chinese llama2 13B model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆90 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆54 · Updated last year
- deep learning ☆150 · Updated 2 weeks ago
- Bidirectional Autoregressive Talker from Generative Pre-trained Transformer ☆38 · Updated last year
- SuperCLUE Langya Leaderboard (琅琊榜): an anonymous-battle evaluation benchmark for general-purpose Chinese large language models ☆145 · Updated 9 months ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Make LLMs easier to use ☆59 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆139 · Updated 11 months ago