twang2218 / vocab-coverage
Analysis of the Chinese cognitive capabilities of language models
☆236 · Updated last year
Alternatives and similar repositories for vocab-coverage:
Users interested in vocab-coverage are comparing it to the libraries listed below.
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆478 · Updated 2 years ago
- ☆304 · Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆653 · Updated last year
- ☆278 · Updated 9 months ago
- Firefly Chinese LLaMA-2 large model; supports incremental pretraining of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆406 · Updated last year
- MD5 links for a Chinese book corpus ☆213 · Updated last year
- ☆173 · Updated last year
- ☆159 · Updated last year
- A tool for manually annotating and ranking response data in the RLHF stage of large-model training ☆247 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, and others ☆263 · Updated last year
- OpenLLMWiki: docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆256 · Updated 2 months ago
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆390 · Updated last year
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆421 · Updated last year
- Focused on Chinese domain-specific large language models: grounding them in a particular industry or field to serve as industry-level or company-level domain models ☆115 · Updated 5 months ago
- A line-by-line annotated walkthrough of the Baichuan2 code, suitable for beginners ☆212 · Updated last year
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning support for multi-turn dialogue ☆398 · Updated last year
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; samples are auto-generated integer and decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 · Updated last year
- Chinese instruction fine-tuning dataset for Alpaca ☆392 · Updated last year
- ChatGLM2-6B fine-tuning and Alpaca fine-tuning ☆145 · Updated 10 months ago
- Multi-GPU training for ChatGLM with DeepSpeed and … ☆405 · Updated 7 months ago
- A Chinese Open-Domain Dialogue System ☆319 · Updated last year
- Instruction-tuning tool for large language models (supports FlashAttention) ☆169 · Updated last year
- How to train an LLM tokenizer ☆140 · Updated last year
- Text deduplication ☆68 · Updated 8 months ago
- Implementation of Chinese ChatGPT ☆287 · Updated last year
- ☆62 · Updated last year
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆171 · Updated last year
- Text embedding ☆144 · Updated last year
- A curated collection of open-source SFT datasets, continually updated ☆485 · Updated last year
- A Chinese large language model base built through incremental pre-training on Chinese datasets ☆235 · Updated last year