ziwang-com / zero-lora
Zero-training LLM parameter tuning
☆31 · Updated last year
Alternatives and similar repositories for zero-lora
Users interested in zero-lora are comparing it to the repositories listed below.
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc. ☆139 · Updated last year
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- The world's first Chinese-optimized version of StableVicuna. ☆64 · Updated 2 years ago
- Another ChatGLM2 implementation for GPTQ quantization ☆54 · Updated last year
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆263 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 7 months ago
- AGM (阿格姆): an AI gene-map model that explores the inner workings of AI models and GPT/LLM large models from the perspective of token-weight granules. ☆28 · Updated last year
- The first Chinese llama2 13b model (Base + Chinese dialogue SFT, enabling fluent multi-turn human-machine natural-language interaction) ☆90 · Updated last year
- An open-source multimodal large language model based on baichuan-7b ☆73 · Updated last year
- Kanchil (鼷鹿, the mouse-deer) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 · Updated 2 years ago
- Just for debugging ☆56 · Updated last year
- Aims to provide an intuitive, concrete, and standardized evaluation of current mainstream LLMs ☆94 · Updated 2 years ago
- Implements a cross-model scheme combining multi-LoRA weight integration/switching with Zero-Finetune (zero-fine-tuning) enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and… (a minimal multi-adapter sketch follows this list) ☆115 · Updated last year
- Lightweight local website for displaying the performance of different chat models. ☆87 · Updated last year
- Imitate OpenAI with Local Models ☆87 · Updated 10 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated 10 months ago
- SuperCLUE Langya Leaderboard (琅琊榜): an anonymous-battle evaluation benchmark for Chinese general-purpose large models ☆145 · Updated last year
- ☆105 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated 2 years ago
- Fine-tunes Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE (see the QLoRA setup sketch after this list) ☆87 · Updated 2 years ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- The complete training code for the open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆69 · Updated 2 years ago
- ☆230 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports baichuan, glm, llama, and moss base models; runs chatglm-6B-class models smoothly on mobile and reaches 10000+ tokens/s on a single GPU ☆45 · Updated last year
- An open-source LLM based on an MoE structure. ☆58 · Updated last year
- Open efforts to implement ChatGPT-like models and beyond. ☆108 · Updated 11 months ago
- ☆225 · Updated last year
- deep learning ☆148 · Updated 2 months ago
- The newest version of llama3, with source code explained line by line in Chinese ☆22 · Updated last year
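
Several entries above center on attaching LoRA adapters to a frozen base model; the multi-LoRA integration/switching item in particular describes swapping adapters on one base without any further fine-tuning. Below is a minimal sketch of that pattern using the Hugging Face peft multi-adapter API — the base-model path and adapter paths are hypothetical placeholders, not artifacts published by that repository.

```python
# Sketch: load one base model, attach multiple LoRA adapters, and switch
# between them at inference time (no retraining of the base weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/base-model"  # hypothetical placeholder (e.g. an LLM-Base)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach a first LoRA adapter and register it under a name.
model = PeftModel.from_pretrained(base, "path/to/lora-chat", adapter_name="chat")
# Load a second adapter (an "LLM-X"-style enhancement) alongside the first.
model.load_adapter("path/to/lora-code", adapter_name="code")

# Switching the active adapter reuses the same frozen base in memory —
# this is the integration-plus-switching idea, with zero extra fine-tuning.
model.set_adapter("code")
inputs = tokenizer("Write a quicksort function.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```

Because only the small adapter matrices differ between tasks, many adapters can share one base model, which is what makes low-cost broad deployment plausible.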
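The QLoRA fine-tuning entry covers adapting Chinese LLMs (ChatGLM, Chinese-LLaMA-Alpaca, BELLE) on quantized base weights. Here is a minimal setup sketch with transformers + peft + bitsandbytes; the model name and every hyperparameter are illustrative assumptions, not that repository's published configuration.

```python
# Sketch: QLoRA = 4-bit quantized frozen base + trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization keeps the frozen base weights small (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-7B",   # placeholder Chinese base model
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Only the small LoRA matrices are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["W_pack"],    # attention projection in Baichuan; varies per model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The resulting model plugs into a standard transformers training loop; the trainable-parameter count is what lets a single consumer GPU fine-tune a 7B-class model.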