pleisto / yuren-13b
Yuren 13B is an information-synthesis large language model continuously pretrained from Llama 2 13B, building on Pleisto's data-centric work.
☆14 · Updated last year
Related projects
Alternatives and complementary repositories for yuren-13b
- An open-source multimodal large language model based on baichuan-7b ☆72 · Updated 11 months ago
- Perform crosstalk with Qian Yu ☆47 · Updated last year
- Gaokao Benchmark for AI ☆105 · Updated 2 years ago
- ☆92 · Updated 6 months ago
- chatglm_rlhf_finetuning ☆27 · Updated last year
- GTS Engine: a powerful NLU training system. GTS-Engine is an out-of-the-box, high-performance natural language understanding engine that focuses on few-shot tasks and can automatically produce NLP models from just a handful of samples. ☆89 · Updated last year
- Kanchil (the chevrotain) is the smallest even-toed ungulate in the world; this open-source project explores whether small models (under 6B parameters) can also be aligned with human preferences. ☆114 · Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆65 · Updated last year
- Demonstrates the remarkable effect of vllm on Chinese large language models ☆31 · Updated last year
- A lightweight local website for displaying the performance of different chat models. ☆85 · Updated last year
- Just for debugging ☆56 · Updated 9 months ago
- deep learning ☆149 · Updated 5 months ago
- Training and tuning LLMs from zero ☆30 · Updated last year
- moss chat finetuning ☆50 · Updated 6 months ago
- GPT+ power tool: a simple and practical one-stop AGI architecture with built-in localization, LLM models, agents, a vector database, and intelligent chains ☆48 · Updated last year
- The PaddlePaddle implementation of Meta's LLaMA. ☆44 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- Architecture diagram of an AGI module library ☆75 · Updated last year
- We are the first fully commercially usable character large model. ☆36 · Updated 3 months ago
- SUS-Chat: Instruction tuning done right ☆47 · Updated 10 months ago
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆22 · Updated 7 months ago
- chatglm-6b fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆163 · Updated last year
- Text deduplication ☆67 · Updated 5 months ago
- MultilingualShareGPT, a free multilingual corpus for LLM training ☆72 · Updated last year
- The first Chinese Llama 2 13B model (base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆89 · Updated last year
- Evaluation for AI apps and agents ☆35 · Updated 10 months ago
- This project performs fast encoding detection and conversion on large numbers of text files to support data cleaning for the MNBVC corpus project ☆55 · Updated last month
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU ☆45 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated last year