iflytek / VLE
VLE: Vision-Language Encoder (VLE: a vision-language multimodal pre-trained model)
☆194 Updated 2 years ago
Alternatives and similar repositories for VLE
Users interested in VLE are comparing it to the libraries listed below
- Chinese OFA model in the transformers architecture ☆137 Updated 2 years ago
- A Chinese Open-Domain Dialogue System ☆324 Updated 2 years ago
- An open-source, commercially usable multimodal model supporting bilingual (Chinese/English) vision-text dialogue. ☆377 Updated last year
- llama inference for tencentpretrain ☆99 Updated 2 years ago
- Chinese CLIP pre-trained model ☆417 Updated 2 years ago
- Multimodal chatbot with integrated computer vision capabilities, our 1st-gen LMM ☆101 Updated last year
- Multimodal Chinese LLaMA & Alpaca large language model (VisualCLA) ☆451 Updated 2 years ago
- Chinese version of CLIP for Chinese cross-modal retrieval and representation generation. ☆169 Updated 2 years ago
- deep learning ☆149 Updated 4 months ago
- 📔 Usage notes and core-code annotations for Chinese-LLaMA-Alpaca ☆50 Updated 2 years ago
- An open-source multimodal large language model based on baichuan-7b ☆72 Updated last year
- Instruction fine-tuning for Baichuan-13B ☆90 Updated 2 years ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆90 Updated 2 years ago
- A tool for manually ranking and annotating response data in the RLHF stage. ☆254 Updated 2 years ago
- Analysis of language models' Chinese cognitive abilities ☆237 Updated 2 years ago
- A Chinese large language model base built through incremental pre-training on Chinese datasets ☆238 Updated 2 years ago
- ☆308 Updated 2 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆172 Updated last year
- A small Chinese chat model, supervised fine-tuned from t5-base on a large amount of data. ☆101 Updated last year
- SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding ☆226 Updated last year
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆389 Updated 2 years ago
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆267 Updated 2 years ago
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆413 Updated last year
- Use RLHF directly on ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF ☆195 Updated 2 years ago
- Implements a cross-model scheme combining multi-LoRA weight ensemble switching with Zero-Finetune (no fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the Chatglm6B base model and LLM-X is the LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and… ☆117 Updated 2 years ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆500 Updated 2 years ago
- chatglm-6b fine-tuning/LoRA/PPO/inference; samples are auto-generated integer/decimal addition, subtraction, multiplication, and division; runs on GPU or CPU ☆164 Updated 2 years ago
- Comprehensive evaluation of the Chinese version of the open-source Llama2 model, based on the SuperCLUE OPEN benchmark | Llama2 Chinese evaluation with SuperCLUE ☆127 Updated 2 years ago
- Chinese instruction fine-tuning dataset for alpaca ☆395 Updated 2 years ago
- Multi-GPU chatglm using deepspeed and… ☆412 Updated last year