zxuu / Self-Attention
A complete implementation of the Transformer, building the Encoder, Decoder, and Self-attention in detail. Demonstrated with a concrete example covering the full input, training, and prediction pipeline; useful for learning and understanding self-attention and the Transformer.
☆79 · Updated last month
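Since the repository's focus is self-attention, here is a minimal sketch of a single-head scaled dot-product self-attention layer in PyTorch. It is illustrative only and is not taken from the repository's actual code; the `SelfAttention` class, its layer names, and the example shapes are invented here.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Minimal single-head scaled dot-product self-attention (illustrative sketch)."""

    def __init__(self, d_model):
        super().__init__()
        # Linear projections that map the input into query, key, and value spaces.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, d_model)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        # Attention scores: (batch, seq_len, seq_len), scaled by sqrt(d_model).
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if mask is not None:
            # Zero entries in the mask (e.g. padding or future tokens) are excluded.
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v  # (batch, seq_len, d_model)

# Example: a batch of 2 sequences of length 5 with 16-dimensional embeddings.
x = torch.randn(2, 5, 16)
print(SelfAttention(d_model=16)(x).shape)  # torch.Size([2, 5, 16])
```

A full Transformer stacks several such heads per layer and adds residual connections, layer normalization, and feed-forward blocks, which is the kind of end-to-end construction a complete implementation like this one covers.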
Alternatives and similar repositories for Self-Attention
Users interested in Self-Attention are comparing it to the libraries listed below
- Hand-written interview questions (not LeetCode) for LLMs (the main focus) and for search/ads/recommendation AI algorithms, such as Self-Attention and AUC; these generally probe overall ability better than LeetCode and sit closer to real business problems and fundamentals (see the AUC sketch after this list) ☆257 · Updated 4 months ago
- TinyRAG ☆295 · Updated 3 weeks ago
- Unlocking the many ways to use the HuggingFace ecosystem ☆90 · Updated 5 months ago
- LLM fundamentals study material and stock interview questions ☆118 · Updated last year
- Notes on reproducing LLM components from scratch ☆192 · Updated 2 weeks ago
- The most concise PyTorch implementation of the Transformer model, with detailed comments ☆195 · Updated last year
- A very, very small RAG system ☆223 · Updated 2 weeks ago
- Theory and practice of LLM inference and deployment ☆259 · Updated 2 months ago
- A place to store code from AI projects and data-mining competition entries ☆99 · Updated 2 months ago
- Training an LLM from scratch on a single 24 GB GPU ☆53 · Updated this week
- ☆148 · Updated 2 weeks ago
- ☆322 · Updated 3 months ago
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆121 · Updated 6 months ago
- Large language model applications: RAG, NL2SQL, chatbots, pre-training, MoE mixture-of-experts models, fine-tuning, reinforcement learning, and Tianchi data competitions ☆61 · Updated 3 months ago
- ☆22 · Updated 2 months ago
- Courseware resources for AI training courses ☆88 · Updated last week
- Deepens understanding of the Transformer model through a guided walkthrough of it ☆183 · Updated 2 months ago
- Chinese documentation for Huggingface transformers ☆242 · Updated last year
- An overview of the LLM technology stack ☆95 · Updated 7 months ago
- Full-parameter, LoRA, and QLoRA fine-tuning of llama3 ☆195 · Updated 7 months ago
- ☆70 · Updated 2 months ago
- Companion code for "Deconstructing Large Language Models: From Linear Regression to Artificial General Intelligence" (《解构大语言模型:从线性回归到通用人工智能》) ☆197 · Updated 4 months ago
- DeepSpeed Tutorial ☆97 · Updated 9 months ago
- The Transformer is the model Google used in its 2017 paper Attention Is All You Need; after years of heavy industrial use and validation in the literature, it now holds a central place in deep learning, and BERT is a language model derived from it. Using Chinese-to-English translation as an example, the author explains the Tran… ☆253 · Updated last year
- modern AI for beginners ☆132 · Updated last month
- Learning LLM Implementation and Theory for Practical Deployment ☆154 · Updated 4 months ago
- Running through the ChatGPT technical pipeline from scratch ☆233 · Updated 8 months ago
- pytorch distributed tutorials ☆131 · Updated last week
- everything about llm & aigc ☆61 · Updated 3 weeks ago
- My AI study notes, including notes and code for Bilibili creator deep_thoughts' PyTorch course and PPT code from the BUPT Deep Learning and Digital Video course ☆34 · Updated 11 months ago
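For the hand-written interview questions mentioned in the first item of this list (e.g. AUC alongside Self-Attention), a typical from-scratch exercise is to compute AUC directly from its rank-based definition instead of calling a library. The sketch below is only an illustration of that kind of question; the function name and example values are made up here, not taken from any of the listed repositories.

```python
def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formula, averaging ranks over ties."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Find the run of tied scores and give each the average 1-based rank.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum_pos = sum(r for r, y in zip(ranks, labels) if y == 1)
    # U statistic of the positives, normalised by the number of pos/neg pairs.
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This is equivalent to the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with tied scores counted as half.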