feizc / MLE-LLaMA
Multi-language Enhanced LLaMA
☆302 · Updated 2 years ago
Alternatives and similar repositories for MLE-LLaMA
Users interested in MLE-LLaMA are comparing it to the libraries listed below.
- ☆460 · Updated last year
- ☆172 · Updated 2 years ago
- BiLLa: A Bilingual LLaMA with Enhanced Reasoning Ability ☆418 · Updated 2 years ago
- Chinese instruction fine-tuning dataset for Alpaca ☆395 · Updated 2 years ago
- Chinese large language model base generated through incremental pre-training on Chinese datasets ☆238 · Updated 2 years ago
- Simple implementation of using LoRA from the PEFT library to fine-tune ChatGLM-6B ☆84 · Updated 2 years ago
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆500 · Updated 2 years ago
- CamelBell (驼铃) is a Chinese language tuning project based on LoRA. CamelBell belongs to Project Luotuo (骆驼), an open-sourced Chinese-… ☆172 · Updated last year
- MD5 links for a Chinese book corpus ☆216 · Updated last year
- ☆308 · Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆535 · Updated 11 months ago
- Analysis of the Chinese cognitive abilities of language models ☆237 · Updated 2 years ago
- A unified tokenization tool for Images, Chinese and English. ☆151 · Updated 2 years ago
- ☆281 · Updated last year
- Implementation of Chinese ChatGPT ☆287 · Updated last year
- Luotuo Embedding (骆驼嵌入) is a text embedding model developed by 李鲁鲁, 冷子昂, 陈启源, 蒟蒻, et al. ☆267 · Updated 2 years ago
- Naive Bayes-based Context Extension ☆326 · Updated 9 months ago
- deep learning ☆149 · Updated 4 months ago
- ☆124 · Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆655 · Updated 2 years ago
- LLM Zoo collects information on various open- and closed-source LLMs ☆271 · Updated 2 years ago
- An open-source chatbot built with ExpertPrompting that achieves 96% of ChatGPT's capability ☆300 · Updated 2 years ago
- A Multi-Turn Dialogue Corpus based on Alpaca Instructions ☆173 · Updated 2 years ago
- Exploring the fine-tuning performance of Chinese instruct data on ChatGLM and LLaMA ☆389 · Updated 2 years ago
- A Chinese Open-Domain Dialogue System ☆323 · Updated 2 years ago
- [NIPS2023] RRHF & Wombat ☆812 · Updated last year
- A PyTorch-based model pruning toolkit for pre-trained language models ☆388 · Updated 2 years ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated last year
- ☆219 · Updated 2 years ago
- Measuring Massive Multitask Chinese Understanding ☆88 · Updated last year