2ertwo / LLaMa3-Numpy-trainable
A trainable LLaMa3 reproduced from scratch in NumPy
☆34 · Updated last year
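For context on what "a trainable LLaMa3 in NumPy" involves, below is a minimal, illustrative sketch of one LLaMA-style building block (RMSNorm) with a hand-derived backward pass in plain NumPy. This is not code from the repository; the function names, shapes, and gradient derivation are assumptions made here for illustration only.

```python
# Illustrative sketch only -- NOT the repository's code.
# Shows the kind of forward + manual-backward pair a pure-NumPy LLaMA trainer needs.
import numpy as np

def rmsnorm_forward(x, gamma, eps=1e-6):
    """RMSNorm as used in LLaMA-style models: gamma * x / rms(x)."""
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * gamma, rms

def rmsnorm_backward(dout, x, gamma, rms):
    """Hand-derived gradients w.r.t. x and gamma (no autograd in NumPy)."""
    d = x.shape[-1]
    x_hat = x / rms
    dgamma = np.sum(dout * x_hat, axis=tuple(range(x.ndim - 1)))
    dx_hat = dout * gamma
    # d(x_i/rms)/dx_j = delta_ij / rms - x_i * x_j / (d * rms^3)
    dx = dx_hat / rms - x * np.sum(dx_hat * x, axis=-1, keepdims=True) / (d * rms**3)
    return dx, dgamma

# Tiny usage example with made-up shapes.
x = np.random.randn(2, 8).astype(np.float32)
gamma = np.ones(8, dtype=np.float32)
y, rms = rmsnorm_forward(x, gamma)
dx, dgamma = rmsnorm_backward(np.ones_like(y), x, gamma, rms)
```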
Alternatives and similar repositories for LLaMa3-Numpy-trainable
Users interested in LLaMa3-Numpy-trainable are comparing it to the libraries listed below
- Everything about LLM & AIGC ☆103 · Updated last week
- Implement a miniLLM from scratch (hands-on LLM learning) ☆76 · Updated last year
- Build a MiniLLM from scratch (pretrain + SFT + DPO, in progress) ☆482 · Updated 6 months ago
- Notes on from-scratch reproductions of LLM-related work ☆221 · Updated 5 months ago
- Learning large models through illustrations ☆320 · Updated last year
- TinyRAG ☆345 · Updated 3 months ago
- Retriever-0.1B ☆95 · Updated last year
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆158 · Updated 11 months ago
- Theory and practice of large-model / LLM inference and deployment ☆345 · Updated 2 months ago
- An overview of the large-model technology stack ☆113 · Updated last year
- Unlocking the many uses of the HuggingFace ecosystem ☆93 · Updated 9 months ago
- ☆76 · Updated 4 months ago
- ☆33 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) ☆132 · Updated last year
- Pretrain a wiki LLM using transformers ☆51 · Updated last year
- Run through ChatGPT's technical pipeline from scratch ☆259 · Updated last year
- ☆77 · Updated last year
- A simple, cross-platform RAG framework and tutorial ☆215 · Updated 3 weeks ago
- Deepen your understanding of the Transformer by walking through the model step by step ☆213 · Updated 4 months ago
- AM (Advanced Mathematics) Chat is a large language model that integrates advanced mathematical knowledge, exercises in higher mathematics… ☆211 · Updated last year
- personal chatgpt ☆385 · Updated 9 months ago
- Large language model applications: RAG, NL2SQL, chatbots, pretraining, MoE (mixture-of-experts) models, fine-tuning, reinforcement learning, Tianchi data competitions ☆69 · Updated 7 months ago
- ☆300 · Updated 5 months ago
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆739 · Updated 5 months ago
- A complete implementation of the Transformer: builds the Encoder, Decoder, and self-attention in detail, demonstrated with a concrete example covering the full input, training, and prediction pipeline. Useful for learning and understanding self-attention and the Transformer ☆100 · Updated 5 months ago
- Chinese documentation for Hugging Face transformers ☆271 · Updated last year
- A beginner's tutorial on model compression; PDF download: https://github.com/datawhalechina/awesome-compression/releases ☆327 · Updated 3 months ago
- ☆115 · Updated 10 months ago
- Train an LLM from scratch on a single 24 GB GPU ☆55 · Updated 2 months ago
- ☆102 · Updated 7 months ago