LC1332 / Luotuo-Fighter
骆驼大乱斗: Massive Game Content Generated by LLM
☆19 · Updated last year
Alternatives and similar repositories for Luotuo-Fighter:
Users who are interested in Luotuo-Fighter are comparing it to the repositories listed below.
- The first fully commercially usable character roleplay large language model. ☆39 · Updated 7 months ago
- Just for debugging. ☆56 · Updated last year
- Uses LangChain for task planning and builds conversational scene resources for each subtask; an MCTS task executor then lets every subtask reach its best answer through the in-context resources and self-reflective exploration. This approach relies on the model's alignment preferences, and an engineering framework is designed for each preference to sample rewards over the different candidate answers (a simplified sketch of this candidate-selection loop follows this list). ☆29 · Updated this week
- SUS-Chat: Instruction tuning done right. ☆48 · Updated last year
- zero: zero-training LLM parameter tuning. ☆31 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆69 · Updated last year
- ChatGLM fine-tuned on a 甄嬛 (Zhen Huan) dialogue corpus. ☆84 · Updated last year
- Lightweight local website for displaying the performance of different chat models. ☆85 · Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆64 · Updated last year
- The first Chinese Llama2-13B model (base + Chinese dialogue SFT), enabling fluent multi-turn human-machine natural-language interaction. ☆90 · Updated last year
- The Silk Magic Book will record the Magic Prompts on some very Large LLMs. The Silk Magic Book belongs to the project Luotuo(骆驼), which c… ☆56 · Updated last year
- An open-source LLM based on an MoE (Mixture-of-Experts) structure. ☆58 · Updated 8 months ago
- Deep learning. ☆150 · Updated this week
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆139 · Updated 11 months ago
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Perform crosstalk with Qian Yu. ☆50 · Updated last year
- A plan to extend ChatHaruhi into a zero-shot roleplaying model. ☆104 · Updated 11 months ago
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B parameters) can also be aligned with human preferences. ☆113 · Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct. ☆122 · Updated 2 months ago
- GLM Series Edge Models. ☆130 · Updated 3 weeks ago
- A multimodal image-text dialogue large model built from Blip2RWKV + QFormer; using a Two-Step Cognitive Psychology Prompt method, a model of only 3B parameters can already exhibit human-like causal chains of thought. Benchmarked against image-text dialogue LLMs such as MiniGPT-4 and ImageBind, it aims to use less compute and resources to … ☆38 · Updated last year
- Gaokao Benchmark for AI. ☆108 · Updated 2 years ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; the training samples are auto-generated integer and decimal arithmetic problems (addition, subtraction, multiplication, division), and it runs on GPU or CPU (a sample-generator sketch follows this list). ☆164 · Updated last year
- A cross-model scheme combining multi-LoRA weight ensemble switching with Zero-Finetune (zero fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be widely deployed at low energy cost and … (an adapter-switching sketch follows this list). ☆117 · Updated last year
- ChatGLM-6B-Slim: ChatGLM-6B with 20K image tokens pruned away; identical performance with a smaller GPU-memory footprint. ☆126 · Updated last year
- The “悟道” (Wudao) model. ☆122 · Updated 3 years ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 (a LoRA configuration sketch follows this list). ☆54 · Updated last year
- Imitate OpenAI with Local Models. ☆87 · Updated 6 months ago
- Luotuo QA (骆驼QA): a Chinese large language model for reading comprehension. ☆75 · Updated last year
- The newest version of Llama 3, with the source code explained line by line in Chinese. ☆22 · Updated 10 months ago
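
A few entries above name concrete techniques; the sketches below illustrate them in Python under stated assumptions and are not code from the listed repositories. First, the LangChain/MCTS entry describes planning a task into subtasks and letting the model find its best answer through self-reflective exploration. The sketch below simplifies the MCTS executor into a best-of-N loop with self-scoring; `call_llm` is a hypothetical stand-in for any chat-completion client.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def solve_subtask(subtask: str, context: str, n_candidates: int = 4) -> str:
    # Sample several candidate answers that can use the shared in-context resources.
    candidates: List[str] = [
        call_llm(f"Context:\n{context}\n\nSubtask: {subtask}\nAnswer:")
        for _ in range(n_candidates)
    ]

    # Self-reflection: ask the model to score each candidate, keep the best one.
    def score(answer: str) -> float:
        reply = call_llm(
            f"Rate this answer to '{subtask}' from 0 to 10. Reply with a number only:\n{answer}"
        )
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0

    return max(candidates, key=score)

def solve_task(task: str) -> List[str]:
    # Planning step: ask the model to break the task into numbered subtasks.
    plan = call_llm(f"Break this task into numbered subtasks, one per line:\n{task}")
    subtasks = [line.split(".", 1)[-1].strip() for line in plan.splitlines() if line.strip()]

    context, answers = "", []
    for sub in subtasks:
        answer = solve_subtask(sub, context)
        context += f"\n{sub}: {answer}"   # completed subtasks become context for later ones
        answers.append(answer)
    return answers
```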
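
Next, a sketch of the kind of auto-generated arithmetic SFT samples mentioned in the ChatGLM-6B LoRA/PPO entry. The JSONL field names (`instruction`, `output`) and the output file name are assumptions for illustration, not that project's actual format.

```python
import json
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def make_sample() -> dict:
    # Random integer or decimal operands.
    a = round(random.uniform(-100, 100), random.choice([0, 1, 2]))
    b = round(random.uniform(-100, 100), random.choice([0, 1, 2]))
    op = random.choice(list(OPS))
    if op == "/" and b == 0:
        b = 1.0
    result = round(OPS[op](a, b), 4)
    return {"instruction": f"计算 {a} {op} {b} = ?", "output": str(result)}

if __name__ == "__main__":
    # Write 1,000 question/answer pairs as JSON Lines for supervised fine-tuning.
    with open("arithmetic_sft.jsonl", "w", encoding="utf-8") as f:
        for _ in range(1000):
            f.write(json.dumps(make_sample(), ensure_ascii=False) + "\n")
```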
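
The multi-LoRA entry describes switching between several LoRA weight sets on top of one base model. A minimal sketch with the Hugging Face `peft` API follows; the model name and adapter paths are placeholders, and that repository's own base models (ChatGLM-6B, LLaMA) and loading code are not reproduced here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model once; the names below are placeholders for real checkpoints.
base = AutoModelForCausalLM.from_pretrained("base-model-name", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("base-model-name", trust_remote_code=True)

# Attach one LoRA adapter, then register a second one under its own name.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter-a", adapter_name="adapter_a")
model.load_adapter("path/to/lora-adapter-b", adapter_name="adapter_b")

# Switch adapters at inference time without reloading the base weights.
model.set_adapter("adapter_a")
# ... generate with adapter_a ...
model.set_adapter("adapter_b")
# ... generate with adapter_b ...
```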
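
Finally, a configuration sketch for the LoRA half of the LoRA / P-Tuning v2 entry, again using `peft`. The hyperparameters are illustrative rather than that repository's settings; `query_key_value` is ChatGLM-6B's fused attention projection.

```python
from transformers import AutoModel
from peft import LoraConfig, TaskType, get_peft_model

# Load ChatGLM-6B and wrap it so that only the LoRA weights are trainable.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update (illustrative)
    lora_alpha=32,                        # scaling factor (illustrative)
    lora_dropout=0.1,
    target_modules=["query_key_value"],   # ChatGLM-6B's fused QKV projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # confirms the small trainable fraction
```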