mli / transformers-benchmarks
Real Transformer TeraFLOPS on various GPUs
☆906 · Updated last year
Alternatives and similar repositories for transformers-benchmarks
Users interested in transformers-benchmarks are comparing it to the libraries listed below.
- Several simple examples for popular neural network toolkits calling custom CUDA operators. ☆1,482 · Updated 4 years ago
- SwissArmyTransformer is a flexible and powerful library for developing your own Transformer variants. ☆1,083 · Updated 6 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated 2 weeks ago
- A fast MoE implementation for PyTorch ☆1,757 · Updated 5 months ago
- Rotary Transformer ☆974 · Updated 3 years ago
- PyTorch distributed training tutorials ☆141 · Updated 3 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆385 · Updated this week
- ☆610 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,099 · Updated 3 months ago
- How to use wandb? ☆660 · Updated last year
- An implementation of Transformer, BERT, GPT, and diffusion models for learning purposes ☆155 · Updated 8 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆599 · Updated last month
- PyTorch Project Specification. ☆679 · Updated 3 years ago
- Best practices for training LLaMA models in Megatron-LM ☆657 · Updated last year
- Cool Papers - Immersive Paper Discovery ☆571 · Updated last month
- A guide to single-machine multi-GPU training in PyTorch: methods and principles ☆841 · Updated 3 years ago
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,182 · Updated this week
- A quickstart and benchmark for PyTorch distributed training. ☆1,669 · Updated 11 months ago
- A purer tokenizer with a higher compression ratio ☆480 · Updated 7 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,401 · Updated last year
- Hugging Face mirror download ☆581 · Updated 3 months ago
- Train a 1B-parameter LLM on 1T tokens from scratch as a personal project ☆688 · Updated 2 months ago
- Tencent pre-training framework in PyTorch & pre-trained model zoo ☆1,075 · Updated 11 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting DeepSeek FP8/FP4 ☆851 · Updated last week
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,030 · Updated 9 months ago
- A personal ChatGPT ☆373 · Updated 6 months ago
- Understanding the Transformer from its underlying mechanisms ☆27 · Updated 3 years ago
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀, dedicated to in-depth understanding, discussion, and implementation of techniques, principles, and applications related to large models. ☆328 · Updated 11 months ago
- Inference code for LLaMA models ☆121 · Updated last year
- A pupil in the computer world. (Felix Fu) ☆238 · Updated last year