WangHuiNEU / Transformer_Knowlegde
Understanding the Transformer from its underlying mechanisms
☆27 · Updated 2 years ago
Alternatives and similar repositories for Transformer_Knowlegde:
Users interested in Transformer_Knowlegde are comparing it to the repositories listed below.
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated last year
- A lightweight script for maintaining a LOT of machine learning experiments. ☆91 · Updated 2 years ago
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆151 · Updated 4 months ago
- ☆179 · Updated 4 months ago
- The official repo of the INF-34B models trained by INF Technology. ☆34 · Updated 6 months ago
- DeepSpeed tutorials, annotated examples, and study notes (efficient large-model training) ☆150 · Updated last year
- Rectified Rotary Position Embeddings ☆351 · Updated 9 months ago
- The Roadmap for LLMs ☆85 · Updated last year
- The pure and clear PyTorch Distributed Training Framework. ☆276 · Updated last year
- A handy script for grabbing GPUs ahead of others ☆301 · Updated last month
- A Tight-fisted Optimizer ☆47 · Updated last year
- Code for A New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models ☆58 · Updated this week
- Adds Sequence Parallelism to LLaMA-Factory ☆155 · Updated this week
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆119 · Updated 11 months ago
- ☆52 · Updated last year
- Yet another PyTorch Trainer and some core components for deep learning. ☆212 · Updated 9 months ago
- RoFormer V1 & V2 in PyTorch ☆485 · Updated 2 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆358 · Updated last year
- FLASHQuad_pytorch ☆67 · Updated 2 years ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆277 · Updated last year
- Paper List for In-context Learning 🌷 ☆176 · Updated last year
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆177 · Updated last year
- Lion and Adam optimization comparison ☆57 · Updated last year
- Must-read papers on improving efficiency for pre-trained language models. ☆102 · Updated 2 years ago
- ☆187 · Updated last year
- A paper list of pre-trained language models (PLMs). ☆80 · Updated 3 years ago
- An implementation of the DPO algorithm ☆30 · Updated 8 months ago
- Related works and background techniques behind OpenAI o1 ☆211 · Updated last month
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. ☆186 · Updated 2 years ago