ymcui / alpaca.cpp
☆50 · Updated this week
Related projects:
- Gaokao Benchmark for AI☆104 · Updated 2 years ago
- moss chat finetuning☆50 · Updated 4 months ago
- Zero-shot learning evaluation benchmark, Chinese version☆54 · Updated 3 years ago
- Chinese large language model evaluation, round 2☆68 · Updated 10 months ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE☆86 · Updated last year
- MOSS 003 WebSearchTool: A simple but reliable implementation☆45 · Updated last year
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference on longer texts without fine-tuning☆45 · Updated last year
- ☆172 · Updated last year
- Efficient parameter fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2☆54 · Updated last year
- llama inference for tencentpretrain☆95 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆58 · Updated last year
- chatglm_rlhf_finetuning☆27 · Updated 11 months ago
- GTS Engine: A powerful NLU Training System. GTS Engine (GTS-Engine) is an out-of-the-box, high-performance natural language understanding engine focused on few-shot tasks, able to produce NLP models automatically from only a small number of samples.☆87 · Updated last year
- The multilingual variant of GLM, a general language model trained with autoregressive blank infilling objective☆62 · Updated last year
- 1.4B sLLM for Chinese and English - HammerLLM🔨☆43 · Updated 5 months ago
- MultilingualShareGPT, the free multi-language corpus for LLM training☆72 · Updated last year
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU☆164 · Updated last year
- A more efficient GLM implementation!☆54 · Updated last year
- Large-scale Chinese corpus☆34 · Updated 4 years ago
- A tool for time expression extraction, parsing, and normalization☆48 · Updated last year
- Summarizes all open-source large language models and low-cost methods for replicating ChatGPT.☆134 · Updated last year
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING☆86 · Updated 5 months ago
- Silk Road will be the dataset zoo for Luotuo (骆驼). Luotuo is an open-sourced Chinese-LLM project founded by 陈启源 @ 华中师范大学 & 李鲁鲁 @ 商汤科技 & 冷子…☆38 · Updated 10 months ago
- LLM with LuXun (鲁迅) style☆75 · Updated last year
- 骆驼QA (Luotuo QA), a Chinese large language model for reading comprehension.☆71 · Updated last year
- Chinese large language model evaluation, round 1☆105 · Updated 10 months ago
- Source code for ACL 2023 paper Decoder Tuning: Efficient Language Understanding as Decoding☆47 · Updated last year
- NLU & NLG (zero-shot) based on the mengzi-t5-base-mt pretrained model☆75 · Updated last year
- Implements a cross-model technique combining multi-LoRA weight ensembling and switching with Zero-Finetune enhancement: LLM-Base + LLM-X + Alpaca. Initially, LLM-Base is the ChatGLM-6B base model and LLM-X is a LLaMA enhancement model. The scheme is simple and efficient, aiming to let such language models be deployed widely at low energy cost, and …☆118 · Updated last year
- GLM (General Language Model)☆24 · Updated 2 years ago