dandelionsllm / pandallm
The Panda project, launched in May 2023, is an overseas open-source Chinese large language model project dedicated to exploring the full technology stack in the era of large models, with the goal of advancing innovation and collaboration in Chinese natural language processing.
☆1,034 Updated last year
Alternatives and similar repositories for pandallm:
Users interested in pandallm are comparing it to the libraries listed below
- Official codes for ACL 2023 paper "WebCPM: Interactive Web Search for Chinese Long-form Question Answering" ☆915 Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆653 Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning. ☆983 Updated 9 months ago
- Repo for adapting Meta LlaMA2 in Chinese! A Chinese adaptation of Meta's newly released LlaMA2 (fully open source and commercially usable) ☆749 Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆421 Updated last year
- A manually refined Chinese dialogue dataset and fine-tuning code for chatglm ☆1,166 Updated 8 months ago
- [ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | A bilingual Chinese-English multimodal large model series based on the CPM foundation models ☆1,052 Updated 7 months ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open source ChatGPT alternatives/implementations. PiXi… ☆255 Updated last month
- Firefly Chinese LLaMA-2 large model, supporting incremental pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom and other large models ☆403 Updated last year
- Live Training for Open-source Big Models ☆509 Updated last year
- Alpaca-style Chinese instruction fine-tuning dataset ☆392 Updated last year
- Official github repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023] ☆1,669 Updated last year
- Multi-GPU chatglm with deepspeed and… ☆403 Updated 6 months ago
- 骆驼 (Luotuo): A Chinese instruction-finetuned LLaMA. Developed by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子昂 @ 商汤科技 ☆710 Updated last year
- chatglm 6b finetuning and alpaca finetuning ☆1,541 Updated 9 months ago
- A curated collection of open-source SFT datasets, continuously updated ☆475 Updated last year
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆234 Updated last year
- Easy and efficient finetuning of LLMs (supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆587 Updated this week
- XVERSE-13B: A multilingual large language model developed by XVERSE Technology Inc. ☆648 Updated 9 months ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆476 Updated 2 years ago
- FlagEval is an evaluation toolkit for large AI foundation models. ☆316 Updated 6 months ago
- Efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B with the peft library, plus merging the lora model into the base model and 4-bit quantization of the result (see the sketch after this list). ☆356 Updated last year
- A Chinese Open-Domain Dialogue System ☆318 Updated last year
- Full-parameter fine-tuning of ChatGLM2-6B, supporting efficient fine-tuning for multi-turn dialogue. ☆398 Updated last year
- Code for finetuning ChatGLM-6b using low-rank adaptation (LoRA) ☆725 Updated last year
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,056 Updated 5 months ago
- PromptCLUE, a zero-shot learning model supporting a full range of Chinese tasks ☆659 Updated last year
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,667 Updated last year
- CMMLU: Measuring massive multitask language understanding in Chinese ☆717 Updated last month
- Evaluation and alignment research on the values of Chinese large language models ☆490 Updated last year
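Several of the entries above describe the same QLoRA-style workflow referenced in the peft-based ChatGLM item: load a 4-bit quantized base model, attach LoRA adapters, train them, then merge the adapters back into a full-precision copy of the base model. The sketch below is a minimal illustration of that general pattern using the Hugging Face transformers/peft/bitsandbytes APIs, not code from any of the listed repositories; the `THUDM/chatglm2-6b` checkpoint, the `query_key_value` target module, and the output paths are assumptions for the example.

```python
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model

# Load the base model in 4-bit NF4 (QLoRA-style); ChatGLM checkpoints need trust_remote_code.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b",            # assumed checkpoint, swap in the model you are tuning
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)

# Attach LoRA adapters. "query_key_value" is the fused attention projection typically used
# for ChatGLM -- an assumption here; check named_modules() for your checkpoint.
lora_config = LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# ... train the adapters here with your preferred Trainer or training loop ...

# To merge, reload the base model in half precision (folding adapters directly into 4-bit
# weights is not what you want), apply the saved adapter, and merge it into the base weights.
full = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b", trust_remote_code=True, torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(full, "output/chatglm2-qlora")  # hypothetical adapter path
merged = merged.merge_and_unload()
merged.save_pretrained("output/chatglm2-merged")                   # hypothetical output path
```

The merged checkpoint can then be quantized again for deployment if desired; the individual repositories above differ in training loops, data formats, and quantization tooling, so treat this only as a common-denominator outline.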