JingYiJun / MOSS_backend
Backend for the fastnlp MOSS project
☆59Updated last year
Alternatives and similar repositories for MOSS_backend
Users that are interested in MOSS_backend are comparing it to the libraries listed below
- Frontend for the MOSS chatbot.☆48Updated last year
- Moss Vortex is a lightweight and high-performance deployment and inference backend engineered specifically for MOSS 003, providing a weal…☆37Updated 2 years ago
- Kanchil(鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also achieve alignment with human preferences.☆112Updated 2 years ago
- SuperCLUE琅琊榜: an anonymous head-to-head evaluation benchmark for general-purpose Chinese large language models☆145Updated last year
- ChatGLM-6B-Slim: a version of ChatGLM-6B with 20K image tokens pruned away, delivering identical performance with a smaller GPU memory footprint.☆126Updated 2 years ago
- deep learning☆148Updated 3 months ago
- BayLing(百聆) is a LLaMA-based English/Chinese large language model enhanced with language alignment, offering strong English and Chinese capabilities and reaching about 90% of ChatGPT's performance across multilingual and general-task benchmarks.☆318Updated 8 months ago
- MOSS 003 WebSearchTool: A simple but reliable implementation☆45Updated 2 years ago
- A cross-model scheme combining multi-LoRA weight ensemble switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is a ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low cost, and…☆117Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆139Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training☆72Updated 2 years ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets☆237Updated 2 years ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more☆97Updated last year
- LingoWhale-8B: an open bilingual pre-trained large language model☆140Updated last year
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models☆19Updated 2 years ago
- CamelBell(驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo(骆驼), an open-sourced Chinese-…☆173Updated last year
- A unified tokenization tool for Images, Chinese and English.☆151Updated 2 years ago
- A more efficient GLM implementation!☆55Updated 2 years ago
- LLaMA inference for TencentPretrain☆98Updated 2 years ago
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆263Updated last year
- Light local website for displaying performances from different chat models.☆87Updated last year
- The first Chinese Llama 2 13B model (base + Chinese dialogue SFT, enabling fluent multi-turn human-machine interaction in natural language)☆91Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆66Updated 2 years ago
- 👋 Welcome to the world of ChatGLM creativity! Use the revision and continuation features to generate creative content!☆244Updated last year
- Self-hosted ChatGLM-6B API built with FastAPI☆79Updated 2 years ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU☆164Updated last year
- An open-domain multi-turn evaluation benchmark for general-purpose Chinese foundation models☆79Updated last year
- ☆84Updated last year
- A Chinese instruction fine-tuning dataset in the Alpaca style☆393Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆69Updated 2 years ago