mindspore-lab / mindformers
☆173 · Updated this week

Alternatives and similar repositories for mindformers
Users interested in mindformers are comparing it to the libraries listed below.
- ☆46 · Updated last year
- A line-by-line annotated version of the Baichuan2 code, suitable for beginners ☆214 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆338 · Updated last year
- ☆353 · Updated last year
- OpenLLMWiki: Docs of OpenLLMAI. Survey, reproduction and domain/task adaptation of open source chatgpt alternatives/implementations. PiXi… ☆261 · Updated 9 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆137 · Updated 9 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- Yuan 2.0 Large Language Model ☆689 · Updated last year
- Firefly: a Chinese LLaMA-2 large model, supporting continued pre-training and fine-tuning of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models ☆413 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆445 · Updated 11 months ago
- Collection of Chinese Books ☆195 · Updated last year
- Chinese LLM fine-tuning (LLM-SFT) with the MWP-Instruct math instruction dataset; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (… ☆210 · Updated last year
- vLLM Documentation in Simplified Chinese / vLLM 中文文档 ☆95 · Updated last week
- FlagEval is an evaluation toolkit for AI large foundation models. ☆339 · Updated 4 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆606 · Updated 2 weeks ago
- Accelerate inference without tears ☆322 · Updated 5 months ago
- ☆36 · Updated 8 months ago
- Multi-GPU ChatGLM with DeepSpeed and … ☆411 · Updated last year
- ☆55 · Updated this week
- ☆231 · Updated last year
- Curated open-source SFT datasets, updated continuously ☆539 · Updated 2 years ago
- A multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆409 · Updated last year
- Alpaca Chinese Dataset: a Chinese instruction fine-tuning dataset ☆213 · Updated 11 months ago
- ☆309 · Updated 2 years ago
- A large language model training and testing tool built on HuggingFace. Supports web UI and terminal inference for each model; low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO); and model merging and quantization. ☆219 · Updated last year
- A purer tokenizer with a higher compression ratio ☆482 · Updated 9 months ago
- This is a repository used by individuals to experiment with and reproduce the pre-training process of an LLM. ☆470 · Updated 4 months ago
- How to train an LLM tokenizer ☆152 · Updated 2 years ago
- Model Compression for Big Models ☆164 · Updated 2 years ago
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆727 · Updated 4 months ago