OpenBMB / ModelCenter
Efficient, Low-Resource, Distributed transformer implementation based on BMTrain
☆266 · Updated last year
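For orientation, here is a minimal sketch of how a ModelCenter model might be loaded on top of BMTrain's distributed setup. The entry points used below (`bmtrain.init_distributed`, `model_center.model.Bert`, `model_center.tokenizer.BertTokenizer`) and the checkpoint name are assumptions drawn from the project's documented workflow, so verify them against the current repository before relying on them.

```python
# Minimal sketch (assumed API): module paths, signatures, and the checkpoint
# name below are not verified against the current ModelCenter release.
import bmtrain as bmt
from model_center.model import Bert
from model_center.tokenizer import BertTokenizer

# BMTrain-style distributed initialization; launch the script with a
# distributed launcher, e.g. `torchrun --nproc_per_node=<num_gpus> demo.py`.
bmt.init_distributed(seed=0)

# Load a pretrained checkpoint and its matching tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = Bert.from_pretrained("bert-base-uncased")
```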
Alternatives and similar repositories for ModelCenter
Users interested in ModelCenter are comparing it to the libraries listed below.
- ☆281 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆115 · Updated 2 years ago
- Implementation of Chinese ChatGPT ☆289 · Updated last year
- MD5 links for a Chinese book corpus ☆217 · Updated last year
- Chinese instruction tuning datasets ☆140 · Updated last year
- Naive Bayes-based Context Extension ☆324 · Updated 11 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆285 · Updated 2 years ago
- Model Compression for Big Models ☆166 · Updated 2 years ago
- ☆459 · Updated last year
- ☆84 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆418 · Updated 3 weeks ago
- ☆180 · Updated 2 years ago
- Collaborative Training of Large Language Models in an Efficient Way ☆414 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆613 · Updated 2 weeks ago
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction, and domain/task adaptation of open-source ChatGPT alternatives/implementations. PiXi… ☆262 · Updated 11 months ago
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- MEASURING MASSIVE MULTITASK CHINESE UNDERSTANDING ☆89 · Updated last year
- Using RLHF directly on ChatGLM to raise or lower the probability of target outputs | Modify ChatGLM output with only RLHF ☆196 · Updated 2 years ago
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆237 · Updated 2 years ago
- ☆313 · Updated 2 years ago
- A framework for cleaning Chinese dialog data ☆274 · Updated 4 years ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆400 · Updated 4 months ago
- ☆172 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- Instruction-tuning toolkit for large language models (supports FlashAttention) ☆178 · Updated last year
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆225 · Updated last year
- pCLUE: a multi-task prompt-learning dataset with 1,000,000+ examples ☆500 · Updated 3 years ago
- ☆321 · Updated last year