SUSTech-IDEA / SUS-Chat
SUS-Chat: Instruction tuning done right
☆48Updated last year
Alternatives and similar repositories for SUS-Chat
Users who are interested in SUS-Chat are comparing it to the libraries listed below
- Mixture-of-Experts (MoE) Language Model☆189Updated 10 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆139Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆263Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently☆166Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs☆136Updated 7 months ago
- ☆230Updated last year
- The official repo of the Aquila2 series from BAAI, including pretrained and chat large language models.☆443Updated 9 months ago
- code for Scaling Laws of RoPE-based Extrapolation☆73Updated last year
- Imitate OpenAI with Local Models☆87Updated 10 months ago
- ☆144Updated last year
- GLM Series Edge Models☆144Updated last month
- zero: zero-training LLM parameter tuning☆31Updated last year
- Generate multi-round conversation roleplay data based on self-instruct and evol-instruct.☆129Updated 6 months ago
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs☆94Updated 2 years ago
- Text deduplication☆74Updated last year
- ☆324Updated last year
- ☆105Updated last year
- Uses langchain for task planning and builds conversational scene resources for subtasks; an MCTS task executor lets each subtask draw on in-context resources and self-reflective exploration to find its best answer to the problem. This approach depends on the model's alignment preferences; for each preference we design an engineering framework to sample rewards over the different answers☆29Updated 2 months ago
- ☆172Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024☆57Updated 8 months ago
- Complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF.☆66Updated 2 years ago
- A native Chinese benchmark for evaluating retrieval-augmented generation☆119Updated last year
- Instruction tuning toolkit for large language models (supports FlashAttention)☆174Updated last year
- Just for debugging☆56Updated last year
- Another ChatGLM2 implementation for GPTQ quantization☆54Updated last year
- ☆225Updated last year
- FlagEval is an evaluation toolkit for large AI foundation models.☆338Updated 2 months ago
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models☆40Updated last year
- Lightweight local website for displaying the performance of different chat models.☆87Updated last year
- Code for the piccolo embedding model from SenseTime☆131Updated last year