zxuu / Self-Attention
A complete implementation of the Transformer, building the Encoder, Decoder, and Self-Attention in detail. Demonstrated with a concrete example covering the full input, training, and prediction pipeline. Useful for learning and understanding self-attention and the Transformer.
☆91 · Updated 3 months ago
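The scaled dot-product self-attention this repository implements can be sketched as follows. This is a minimal NumPy version for orientation only; all names, shapes, and the random initialization are illustrative and not taken from the repo's code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

A full Transformer block, as the repo builds it, would add multi-head splitting, a residual connection, layer normalization, and a feed-forward sublayer around this core.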
Alternatives and similar repositories for Self-Attention
Users that are interested in Self-Attention are comparing it to the libraries listed below
- modern AI for beginners ☆146 · Updated last month
- Hand-written interview questions for LLM roles (the focus) and other AI-algorithm roles such as search, ads, and recommendation (not LeetCode), e.g. Self-Attention, AUC. These generally probe overall ability more than LeetCode does, and sit closer to real business problems and fundamentals ☆309 · Updated 6 months ago
- Code for AI projects and data-mining competition entries ☆100 · Updated 4 months ago
- Chinese documentation for Hugging Face transformers ☆259 · Updated last year
- TinyRAG ☆314 · Updated 2 weeks ago
- An overview of the LLM technology stack ☆107 · Updated 9 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆122 · Updated 8 months ago
- Building a MiniLLM from scratch (pretrain + SFT + DPO, in progress) ☆450 · Updated 3 months ago
- MindSpore online courses: Step into LLM ☆473 · Updated 2 weeks ago
- The most concise PyTorch implementation of the Transformer model, with detailed comments ☆205 · Updated last year
- A very, very small RAG system ☆261 · Updated 2 months ago
- Theory and practice of LLM inference and deployment ☆291 · Updated 4 months ago
- Unlocking the many uses of the Hugging Face ecosystem ☆93 · Updated 7 months ago
- Reading papers with Mu Li ☆202 · Updated last month
- Learning LLM implementation and theory for practical deployment ☆169 · Updated 6 months ago
- Notes on reproducing LLM components from scratch ☆205 · Updated 2 months ago
- LLM fundamentals and standard interview Q&A ☆135 · Updated last year
- https://hnlp.boyuai.com ☆93 · Updated 9 months ago
- The Transformer is the model Google introduced in the 2017 paper Attention Is All You Need; after years of heavy industrial use and validation in the literature, it holds a central place in deep learning. BERT is a language model derived from the Transformer. Using Chinese-to-English translation as an example, I will explain Tran… ☆267 · Updated last year
- A small-parameter Chinese large language model implemented from scratch. ☆731 · Updated 10 months ago
- Companion code for 《解构大语言模型:从线性回归到通用人工智能》 (Deconstructing Large Language Models: From Linear Regression to Artificial General Intelligence) ☆214 · Updated 6 months ago
- AI training course materials ☆103 · Updated this week
- Big-tech interview questions and interview experiences for programmers ☆141 · Updated last month
- Learning LLMs through diagrams ☆315 · Updated 11 months ago
- A Chinese tutorial on Git ☆155 · Updated last year
- DeepSpeed Tutorial ☆98 · Updated 11 months ago
- ☆230 · Updated 2 months ago
- Interview materials for algorithm roles, including the 百面深度学习 and 百面机器学习 books, etc. ☆56 · Updated 2 years ago
- ☆94 · Updated 4 months ago
- A collection of common interview questions and interview experiences for LLM roles, with detailed answers and analysis. Maintained by the Jiaoying community at Shanghai Jiao Tong University ☆97 · Updated 10 months ago
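AUC, named alongside Self-Attention above as a typical hand-written interview exercise, can be computed directly from its probabilistic definition: the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. A sketch (the function name and the choice to count ties as one half are mine, not from any listed repo):

```python
def auc(labels, scores):
    """AUC = P(score of random positive > score of random negative),
    counting ties as 1/2. O(P*N) pairwise version for clarity; a real
    implementation would sort and use ranks for O(n log n)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.4]))  # 1.0: every positive outranks every negative
```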