StarRing2022 / MiniRWKV-4
A multimodal image-text dialogue LLM implementing Blip2RWKV + QFormer. Using a Two-Step Cognitive Psychology Prompt method, a model with only 3B parameters can exhibit human-like causal chains of thought. It benchmarks against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, aiming for better intelligence with less compute and fewer resources.
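The listing does not spell out the Two-Step Cognitive Psychology Prompt itself, but the idea can be sketched: first elicit a perception pass that describes the image, then feed that description back as context for a causal-reasoning question. Below is a minimal Python sketch under that assumption; `Blip2RWKVChat` and its `chat()` method are hypothetical stand-ins, not MiniRWKV-4's actual API.

```python
# Minimal sketch of a two-step "cognitive psychology" prompt flow.
# Blip2RWKVChat and chat() are hypothetical stand-ins, not the project's API.

class Blip2RWKVChat:
    def chat(self, image_path: str, prompt: str) -> str:
        """Placeholder: run QFormer-bridged RWKV inference on (image, prompt)."""
        raise NotImplementedError

def two_step_chat(model: Blip2RWKVChat, image_path: str, question: str) -> str:
    # Step 1 (perception): have the model describe what it sees.
    description = model.chat(image_path, "Describe this image in detail.")
    # Step 2 (causal reasoning): condition the question on that description,
    # nudging the model toward a human-like causal chain of thought.
    reasoning_prompt = (
        f"The image shows: {description}\n"
        "Based on this, reason step by step about cause and effect, "
        f"then answer: {question}"
    )
    return model.chat(image_path, reasoning_prompt)
```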
☆40 · Updated 2 years ago
Alternatives and similar repositories for MiniRWKV-4
Users interested in MiniRWKV-4 are comparing it to the libraries listed below.
- Implements a cross-model scheme combining multi-LoRA weight-ensemble switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the Chatglm6B base model and LLM-X is a LLAMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and … (a minimal LoRA-switching sketch follows this list) ☆116 · Updated 2 years ago
- ✅ Runs on a 4 GB GPU | A simple implementation that lets ChatGLM run inference across multiple compute devices (GPU, CPU) on a single machine ☆34 · Updated 2 years ago
- ChatGLM-6B-Slim: ChatGLM-6B with 20K image tokens pruned away; identical performance with a smaller VRAM footprint. ☆127 · Updated 2 years ago
- Just for debugging ☆56 · Updated last year
- deep learning ☆148 · Updated 6 months ago
- 👋 Welcome to the ChatGLM creative world! You can use the revision and continuation features to generate creative content! ☆247 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆141 · Updated last year
- chatglm-6b fine-tuning / LoRA / PPO / inference; the samples are auto-generated integer/decimal addition, subtraction, multiplication, and division problems; runs on GPU or CPU ☆165 · Updated 2 years ago
- The first Chinese llama2 13b model (Base + Chinese dialogue SFT, delivering fluent multi-turn human-machine natural-language interaction) ☆91 · Updated 2 years ago
- rwkv finetuning ☆37 · Updated last year
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open sourced Chinese-… ☆171 · Updated last year
- Luotuo QA (骆驼QA): a Chinese large-language-model reading-comprehension model. ☆75 · Updated 2 years ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- OrionStar-Yi-34B-Chat is an open-source Chinese/English chat model from OrionStar (猎户星空), fine-tuned from the open-source Yi-34B model on 150K+ high-quality corpus samples. ☆261 · Updated last year
- It's an open-source LLM based on an MoE structure. ☆58 · Updated last year
- chatglm fine-tuned on a Zhen Huan (甄嬛) dialogue corpus ☆87 · Updated 2 years ago
- Humanable Chat Generative-model Fine-tuning | LLM fine-tuning ☆207 · Updated 2 years ago
- The Silk Magic Book will record the Magic Prompts on some very Large LLMs. The Silk Magic Book belongs to the project Luotuo(骆驼), which c… ☆58 · Updated 2 years ago
- Demonstrates the remarkable effect of vllm on Chinese large language models ☆31 · Updated 2 years ago
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- llama inference for tencentpretrain ☆99 · Updated 2 years ago
- The official codes for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆265 · Updated last year
- Building NLP that everyone can use. Open source isn't easy, remember to star! ☆101 · Updated 2 years ago
- qwen models finetuning ☆105 · Updated 8 months ago
- Fine-tunes Chinese large language models with qlora, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆90 · Updated 2 years ago
- This project is established for real-time training of the RWKV model. ☆49 · Updated last year
- Self-hosted ChatGLM-6B API made with fastapi ☆79 · Updated 2 years ago
- A QQ Chatbot based on RWKV (W.I.P.) ☆79 · Updated last year
- Another ChatGLM2 implementation for GPTQ quantization ☆53 · Updated 2 years ago
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory ☆29 · Updated last year
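The multi-LoRA weight-switching scheme in the first item above maps naturally onto the multi-adapter API in Hugging Face's peft library. A minimal sketch, assuming a ChatGLM-style base model and illustrative adapter paths (the referenced repo's actual weights and switching logic may differ):

```python
# Minimal sketch of loading one base model and hot-switching between
# several LoRA adapters, in the spirit of the multi-LoRA scheme above.
# Model names and adapter paths are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# Attach the first LoRA adapter, then register more under distinct names.
model = PeftModel.from_pretrained(base, "path/to/lora-llm-x", adapter_name="llm_x")
model.load_adapter("path/to/lora-alpaca", adapter_name="alpaca")

# Switch the active adapter at inference time without reloading base weights.
model.set_adapter("alpaca")
inputs = tokenizer("你好", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because all adapters share the frozen base weights, switching with `set_adapter` is cheap, which is what makes this kind of low-cost multi-persona deployment plausible.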