OpenLMLab / MOSS_Vortex
MOSS Vortex is a lightweight, high-performance deployment and inference backend engineered specifically for MOSS 003. Built on MOSEC and Torch, it provides a rich set of features aimed at enhancing performance and functionality.
☆37 · Updated 2 years ago
Alternatives and similar repositories for MOSS_Vortex
Users interested in MOSS_Vortex are comparing it to the libraries listed below.
- deep learning ☆149 · Updated 6 months ago
- ☆172 · Updated 2 years ago
- Light local website for displaying performances from different chat models. ☆87 · Updated 2 years ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆239 · Updated 2 years ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Multi-language Enhanced LLaMA ☆303 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- LLaMA inference for TencentPretrain ☆99 · Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- GTS Engine: a powerful, out-of-the-box NLU training system focused on few-shot tasks, capable of automatically producing NLP models from only a small number of samples. ☆93 · Updated 2 years ago
- MOSS 003 WebSearchTool: A simple but reliable implementation ☆45 · Updated 2 years ago
- Alpaca Chinese instruction fine-tuning dataset ☆395 · Updated 2 years ago
- Text deduplication ☆77 · Updated last year
- MOSS chat fine-tuning ☆51 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- Implements a cross-model technique combining multi-LoRA weight-ensemble switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA-enhanced model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and… ☆116 · Updated 2 years ago
- A unified tokenization tool for Images, Chinese and English. ☆153 · Updated 2 years ago
- Chinese large language model evaluation, round one ☆110 · Updated 2 years ago
- ☆123 · Updated last year
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs ☆95 · Updated 2 years ago
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model ☆76 · Updated 3 years ago
- Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more ☆99 · Updated last year
- Instruction-tuning toolkit for large language models (supports FlashAttention) ☆178 · Updated last year
- A more efficient GLM implementation! ☆54 · Updated 2 years ago
- Open efforts to implement ChatGPT-like models and beyond. ☆107 · Updated last year
- Imitate OpenAI with Local Models ☆89 · Updated last year
- ☆313 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆139 · Updated 11 months ago