datawhalechina / awesome-compression
A beginner-friendly tutorial on model compression. PDF download: https://github.com/datawhalechina/awesome-compression/releases
☆328 · Updated 3 months ago
Alternatives and similar repositories for awesome-compression
Users interested in awesome-compression are comparing it to the libraries listed below.
- Deep learning systems notes, covering mathematical foundations, detailed explanations of core neural network components, model training strategies, and model compression algorithms. ☆494 · Updated 4 months ago
- Theory and practice of large language model (LLM) inference and deployment. ☆345 · Updated 2 months ago
- MindSpore online courses: Step into LLM. ☆476 · Updated last month
- LLM notes, including model inference, Transformer model structure, and LLM framework code analysis notes. ☆827 · Updated 3 weeks ago
- ☆297 · Updated 11 months ago
- A great project for campus recruiting and internships: build, from scratch, a large-model inference framework supporting LLama2/3 and Qwen2.5. ☆426 · Updated 3 months ago
- Deepens understanding of the Transformer model by walking readers through it step by step. ☆215 · Updated 4 months ago
- Unlocking the many uses of the HuggingFace ecosystem. ☆93 · Updated 9 months ago
- Learning large models through diagrams. ☆320 · Updated last year
- TinyRAG. ☆345 · Updated 3 months ago
- LLM101n: Let's build a Storyteller (Chinese translation). ☆132 · Updated last year
- A curated collection of high-quality full-stack LLM resources. ☆634 · Updated 2 months ago
- A Chinese tutorial on Git. ☆156 · Updated last year
- A very, very small RAG system. ☆298 · Updated 5 months ago
- The most concise PyTorch implementation of the Transformer model, with detailed comments. ☆214 · Updated last year
- An overview of the large-model technology stack. ☆113 · Updated last year
- High-performance computing course, CUDA programming examples, and a deep learning inference framework. ☆59 · Updated 2 years ago
- yolo master: a course introducing the YOLO family of models, including each version's architecture and its improvements, aimed at helping learners grasp the development of the main YOLO models so they can innovate in their own application domains and achieve good results on their own tasks. ☆223 · Updated 3 months ago
- A light llama-like LLM inference framework based on the Triton kernel. ☆153 · Updated 2 weeks ago
- https://hcv.boyuai.com ☆102 · Updated 9 months ago
- ☆300 · Updated 5 months ago
- Hand-written interview questions (not LeetCode) for LLMs (the focus) and for search, advertising, and recommendation AI algorithm roles, e.g. Self-Attention and AUC; these generally test overall ability more than LeetCode does and sit closer to real business needs and fundamentals. ☆375 · Updated 9 months ago
- ☆319 · Updated 3 months ago
- A MindSpore implementation of "Dive into Deep Learning" (《动手学深度学习》), for MindSpore learners following Mu Li's course. ☆119 · Updated 2 years ago
- LLM/MLOps/LLMOps. ☆116 · Updated 4 months ago
- A simple and cross-platform RAG framework and tutorial. ☆215 · Updated 3 weeks ago
- A PyTorch reimplementation of the Transformer. ☆86 · Updated last year
- wow-fullstack: an amazing full-stack development tutorial. ☆209 · Updated 3 months ago
- Inference code for LLaMA models. ☆123 · Updated 2 years ago
- An attempt to write an LLM from scratch, referencing llama and nanogpt. ☆67 · Updated last year