mindspore-lab / mindnlp
An easy-to-use, high-performance NLP and LLM framework based on MindSpore, compatible with 🤗Huggingface models and datasets.
☆885 · Updated this week
Alternatives and similar repositories for mindnlp
Users interested in mindnlp are comparing it to the repositories listed below.
- MindSpore online courses: Step into LLM ☆477 · Updated last week
- LLM inference and deployment: theory and practice ☆321 · Updated last month
- A MindSpore implementation of *Dive into Deep Learning* (《动手学深度学习》), for MindSpore learners following Mu Li's course. ☆119 · Updated last year
- Walking through ChatGPT's technical pipeline from scratch. ☆254 · Updated 11 months ago
- Chinese documentation for Huggingface transformers ☆270 · Updated last year
- TinyRAG ☆328 · Updated 2 months ago
- ☆172 · Updated this week
- personal chatgpt ☆382 · Updated 8 months ago
- ☆369 · Updated 6 months ago
- A small-parameter Chinese large language model implemented from scratch. ☆794 · Updated last year
- Qwen (通义千问) vLLM inference and deployment demo ☆597 · Updated last year
- An attempt to write an LLM from scratch, drawing on llama and nanogpt ☆64 · Updated last year
- PyTorch distributed training tutorials ☆150 · Updated 2 months ago
- LLM&VLM Tutorial ☆1,867 · Updated 3 months ago
- ☆275 · Updated 4 months ago
- Hand-written interview questions (not LeetCode) for AI algorithm roles, focused on LLMs plus search/ads/recommendation, e.g. Self-Attention and AUC; these generally test overall ability more than LeetCode does and sit closer to real business problems and fundamentals ☆349 · Updated 8 months ago
- ☆96 · Updated 2 months ago
- 《EasyOffer》 (an LLM interview-notes collection) is a summer-internship offer guide tailored for LLM learners, recording common big-company hand-written coding questions, interview experiences, and thought questions for summer internships and autumn recruiting; written by a self-described beginner who is still learning, welcomes corrections, and hopes everyone lands their desired Of… ☆323 · Updated 5 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of llama3. ☆209 · Updated 10 months ago
- An overview of the LLM technology stack ☆110 · Updated 11 months ago
- ☆1,038 · Updated last month
- A beginner-friendly tutorial on model compression; PDF download: https://github.com/datawhalechina/awesome-compression/releases ☆317 · Updated 2 months ago
- LLM101n: Let's build a Storyteller (Chinese translation) ☆132 · Updated last year
- ☆74 · Updated 3 months ago
- LLM-related material: fundamentals, standard interview questions (八股文), interview experiences, and classic papers ☆176 · Updated last year
- Building a MiniLLM from 0 to 1 (pretrain + SFT + DPO, in progress) ☆467 · Updated 5 months ago
- Welcome to LLM-Dojo, an open-source place to learn LLMs: a model-training framework built with concise, readable code (supporting mainstream models such as Qwen, Llama, GLM, etc.), an RLHF framework (DPO/CPO/KTO/PPO), and more. 👩🎓👨🎓 ☆848 · Updated last week
- A TinyRAG implementation based on MindSpore ☆17 · Updated 8 months ago
- A personal repository for experimenting with and reproducing the LLM pre-training process. ☆469 · Updated 4 months ago
- Train a 1B LLM on 1T tokens from scratch, as a personal project ☆722 · Updated 4 months ago