JingYiJun / MOSS_backend
Backend for the fastnlp MOSS project
☆60 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for MOSS_backend
- Frontend for the MOSS chatbot. ☆48 · Updated 5 months ago
- MOSS 003 WebSearchTool: A simple but reliable implementation ☆45 · Updated last year
- Make LLMs easier to use ☆58 · Updated last year
- Moss Vortex is a lightweight, high-performance deployment and inference backend engineered specifically for MOSS 003, providing a weal… ☆37 · Updated last year
- The first Chinese llama2 13b model (base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆89 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆70 · Updated last year
- Gaokao Benchmark for AI ☆105 · Updated 2 years ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆114 · Updated last year
- HanFei-1.0 (韩非), the first legal large language model in China trained with full-parameter training ☆99 · Updated last year
- Fast encoding detection and conversion for large volumes of text files, assisting data cleaning for the mnbvc corpus project ☆55 · Updated last month
- ChatGLM instruction fine-tuning based on Chinese legal knowledge ☆43 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆132 · Updated 7 months ago
- Customized fine-tuning on top of open-source Chinese large language models, so you can have your own dedicated language model. ☆44 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated last year
- YaYa (丫丫) is an attempt at instruction fine-tuning with LoRA, using MOSS as the base model, mainly completed by Huang Hongsen and Chen Qiyuan @ Central China Normal University; it is also a sub-project of the Luotuo (骆驼) open-source Chinese large language model. ☆30 · Updated last year
- ☆59 · Updated last year
- Light local website for displaying the performance of different chat models. ☆85 · Updated last year
- Text deduplication ☆67 · Updated 5 months ago
- Self-hosted ChatGLM-6B API built with FastAPI ☆78 · Updated last year
- Just for debugging ☆56 · Updated 9 months ago
- SuperCLUE Langya Leaderboard: an anonymous head-to-head evaluation benchmark for general-purpose Chinese large language models ☆141 · Updated 5 months ago
- A unified tokenization tool for images, Chinese, and English. ☆150 · Updated last year
- Deep learning ☆149 · Updated 4 months ago
- Imitate OpenAI with Local Models ☆85 · Updated 2 months ago
- An open-source multimodal large language model based on baichuan-7b ☆72 · Updated 11 months ago
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆17 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆60 · Updated last year
- llama inference for tencentpretrain ☆96 · Updated last year
- SUS-Chat: Instruction tuning done right ☆47 · Updated 10 months ago