WangRongsheng / Aurora
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
☆260 · Updated 10 months ago
Alternatives and similar repositories for Aurora:
Users interested in Aurora are comparing it to the libraries listed below.
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆420 · Updated last year
- ☆225 · Updated 10 months ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆235 · Updated last year
- Chinese instruction fine-tuning dataset for Alpaca ☆392 · Updated last year
- ☆217 · Updated last year
- A line-by-line annotated walkthrough of the Baichuan2 code, suitable for beginners ☆212 · Updated last year
- deep learning ☆150 · Updated 2 weeks ago
- Lightweight local website for displaying the performance of different chat models ☆85 · Updated last year
- Imitate OpenAI with Local Models ☆88 · Updated 6 months ago
- ☆172 · Updated last year
- ☆128 · Updated last year
- Text deduplication ☆69 · Updated 10 months ago
- ☆192 · Updated last month
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models ☆440 · Updated 5 months ago
- Analysis of language models' Chinese cognitive abilities ☆235 · Updated last year
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆408 · Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models ☆327 · Updated 8 months ago
- ☆279 · Updated 10 months ago
- Mixture-of-Experts (MoE) Language Model ☆185 · Updated 6 months ago
- ☆160 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, and others ☆264 · Updated last year
- A tool for manually annotating and ranking response data in the RLHF stage of large models ☆249 · Updated last year
- ☆306 · Updated last year
- ☆166 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated 3 months ago
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs ☆94 · Updated last year
- ☆311 · Updated 9 months ago
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆367 · Updated 7 months ago
- pCLUE: a multi-task prompt learning dataset with 1,000,000+ examples ☆485 · Updated 2 years ago