Alibaba-NLP / EcomGPT
An Instruction-tuned Large Language Model for E-commerce
☆246 · Updated last year
Alternatives and similar repositories for EcomGPT
Users interested in EcomGPT are comparing it to the libraries listed below.
- SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding ☆225 · Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆495 · Updated 2 years ago
- A Chinese Open-Domain Dialogue System ☆322 · Updated last year
- [SIGIR 2022] Multi-CPR: A Multi Domain Chinese Dataset for Passage Retrieval ☆185 · Updated 2 years ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆236 · Updated 2 years ago
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆267 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆236 · Updated last year
- Baichuan-13B instruction fine-tuning ☆90 · Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆654 · Updated 2 years ago
- ChatGLM multi-GPU training with DeepSpeed ☆408 · Updated 10 months ago
- ☆172 · Updated 2 years ago
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆410 · Updated last year
- ☆63 · Updated 2 years ago
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆390 · Updated 2 years ago
- Comprehensive evaluation of the Chinese versions of the open-source Llama2 model, based on the SuperCLUE OPEN benchmark | Llama2 Chinese evaluation with SuperCLUE ☆126 · Updated last year
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension ☆74 · Updated 2 years ago
- Alpaca Chinese instruction fine-tuning dataset ☆392 · Updated 2 years ago
- A tool for manual response data annotation and ranking in the RLHF stage ☆251 · Updated last year
- text embedding ☆146 · Updated last year
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆125 · Updated 2 years ago
- Baichuan-Chat fine-tuning with LoRA, QLoRA, and other fine-tuning methods; runs with one click ☆70 · Updated last year
- ChatGLM-6B fine-tuning. ☆135 · Updated 2 years ago
- "桃李“: 国际中文教育大模型☆179Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- unified embedding model ☆862 · Updated last year
- A line-by-line annotated version of the Baichuan2 code, suitable for beginners ☆214 · Updated last year
- ☆308 · Updated 2 years ago
- ☆280 · Updated last year
- A large-scale language model for the scientific domain, trained on the RedPajama arXiv split ☆133 · Updated last year
- Raising or lowering the probability of target outputs from ChatGLM with RLHF alone | Modify ChatGLM output with only RLHF ☆194 · Updated 2 years ago