glide-the / InterpretationoDreams
Uses LangChain for task planning and builds conversational scene resources for the subtasks; an MCTS task executor then lets each subtask draw on the resources in its context and explore through self-reflection to reach its own optimal answer to the problem. Because this approach depends on the model's alignment preferences, we designed an engineering framework for each preference that implements a sampling strategy over the model's self-assigned rewards for the different answers.
☆29 · Updated 2 months ago
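The pipeline the description outlines is: plan subtasks with LangChain, then let an MCTS-style executor explore candidate answers for each subtask and rank them by a self-reflection reward. The following is only a minimal, hypothetical Python sketch of that loop, not the repository's actual code: `llm`, `plan_subtasks`, `reflect_score`, and the UCB1 selection rule standing in for full MCTS are all assumptions made for illustration.

```python
import math

# Hypothetical LLM client: prompt in, text out. In the real project this would
# presumably be a LangChain chain or model; here it is only an assumed interface.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a model client here")

def plan_subtasks(goal: str) -> list[str]:
    # Assumed planning step: ask the model to split the goal into subtasks.
    text = llm(f"Break this goal into subtasks, one per line:\n{goal}")
    return [line.strip() for line in text.splitlines() if line.strip()]

def reflect_score(subtask: str, answer: str) -> float:
    # Assumed self-reflection reward: the model critiques an answer and the
    # first token of its verdict is parsed as a 0-10 score.
    verdict = llm(f"Rate 0-10 how well this answers '{subtask}':\n{answer}")
    try:
        return float(verdict.strip().split()[0]) / 10.0
    except (ValueError, IndexError):
        return 0.0

class Candidate:
    def __init__(self, answer: str) -> None:
        self.answer = answer
        self.visits = 0
        self.value = 0.0

def best_answer(subtask: str, n_candidates: int = 4, n_rollouts: int = 16) -> str:
    # Sample several candidate answers, then spend reflection "rollouts" on
    # them with a UCB1 rule so promising candidates are re-scored more often.
    candidates = [Candidate(llm(f"Answer this subtask:\n{subtask}"))
                  for _ in range(n_candidates)]
    for t in range(1, n_rollouts + 1):
        pick = max(
            candidates,
            key=lambda c: float("inf") if c.visits == 0
            else c.value / c.visits + math.sqrt(2 * math.log(t) / c.visits),
        )
        pick.value += reflect_score(subtask, pick.answer)
        pick.visits += 1
    return max(candidates, key=lambda c: c.value / max(c.visits, 1)).answer

def solve(goal: str) -> dict[str, str]:
    # Plan, then resolve each subtask independently.
    return {task: best_answer(task) for task in plan_subtasks(goal)}
```

In the real framework the reward sampling is tied to a specific alignment preference per the description; the bandit-style loop above only shows where that reward signal would plug in.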
Alternatives and similar repositories for InterpretationoDreams
Users interested in InterpretationoDreams are comparing it to the libraries listed below
- Imitate OpenAI with Local Models ☆89 · Updated last year
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆213 · Updated 11 months ago
- ☆231 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆140 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆47 · Updated 8 months ago
- GLM Series Edge Models ☆149 · Updated 3 months ago
- The newest version of llama3, with source code explained line by line in Chinese ☆22 · Updated last year
- The first fully commercially usable role-play large language model. ☆40 · Updated last year
- A role-play multi-LLM chatroom fine-tuned from InternLM2, built from the original text of Journey to the West, its vernacular retelling, and ChatGPT-generated data. The project covers everything about role-play LLMs, from data acquisition and processing, to fine-tuning with XTuner and deploying to OpenXLab, to deployment with LMDeploy, with op… ☆103 · Updated last year
- Use free large-model APIs together with your private-domain data to generate SFT training data (entirely at no cost); supports the training-data formats of tools such as llamafactory (synthetic data) ☆182 · Updated 9 months ago
- Exploring the application potential of LLMs in the legal industry ☆91 · Updated 9 months ago
- As the name suggests: a hand-rolled RAG ☆127 · Updated last year
- Just for debugging ☆56 · Updated last year
- An instruction-tuning tool for large language models (supports FlashAttention) ☆179 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆137 · Updated 9 months ago
- Deep learning ☆149 · Updated 4 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Text deduplication ☆76 · Updated last year
- A plan to extend ChatHaruhi into a zero-shot role-playing model ☆108 · Updated last year
- A line-by-line walkthrough of the Baichuan2 code, suitable for beginners ☆214 · Updated last year
- A native Chinese retrieval-augmented generation evaluation benchmark ☆122 · Updated last year
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- ☆106 · Updated last year
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆266 · Updated last year
- The first Chinese llama2 13b model (base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆91 · Updated 2 years ago
- An implementation of tree-of-thought mode for deepseek ☆18 · Updated last month
- gpt_server is an open-source framework for production-grade deployment of LLMs, embeddings, rerankers, ASR, TTS, text-to-image, image editing, and text-to-video. ☆208 · Updated this week
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- Focused on Chinese-domain large language models, grounding them in a specific industry or domain to become an industry model, or a company-level or industry-level domain model. ☆122 · Updated 6 months ago