WangHuiNEU / Transformer_KnowlegdeLinks
Understanding Transformers from their underlying mechanics
☆27 · Updated 3 years ago
Alternatives and similar repositories for Transformer_Knowlegde
Users that are interested in Transformer_Knowlegde are comparing it to the libraries listed below
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆159 · Updated last year
- A lightweight script for maintaining a lot of machine learning experiments. ☆92 · Updated 3 years ago
- Rotary Transformer ☆1,057 · Updated 3 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- RoFormer V1 & V2 in PyTorch ☆515 · Updated 3 years ago
- ☆51 · Updated 2 years ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆285 · Updated 2 years ago
- ☆882 · Updated last year
- A purer tokenizer with a higher compression rate ☆486 · Updated last year
- DeepSpeed tutorial, annotated examples & study notes (efficient training of large models) ☆183 · Updated 2 years ago
- Real Transformer TeraFLOPS on various GPUs ☆915 · Updated last year
- Code for a new loss for mitigating the bias of learning difficulties in generative language models ☆66 · Updated 9 months ago
- Rectified Rotary Position Embeddings ☆384 · Updated last year
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆371 · Updated 2 years ago
- An implementation of the DPO algorithm ☆48 · Updated last year
- Efficient, low-resource, distributed Transformer implementation based on BMTrain ☆264 · Updated 2 years ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆354 · Updated last year
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- The official repo of INF-34B models trained by INF Technology. ☆34 · Updated last year
- A paper list about diffusion models for natural language processing. ☆182 · Updated 2 years ago
- The Roadmap for LLMs ☆86 · Updated 2 years ago
- PyTorch training code for single-precision, half-precision, mixed-precision, single-GPU, multi-GPU (DP / DDP), FSDP, and DeepSpeed setups, with comparisons of training speed and GPU memory usage across methods ☆127 · Updated last year
- Paper list for in-context learning 🌷 ☆188 · Updated last year
- A convenient script for grabbing idle GPUs ☆381 · Updated 10 months ago
- Training an LLM from scratch on a single 24 GB GPU ☆55 · Updated 4 months ago
- ☆215 · Updated last week
- Fine-tuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆116 · Updated 2 years ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆651 · Updated last year
- How to use wandb? ☆685 · Updated 2 years ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆271 · Updated 9 months ago