hkproj / pytorch-transformer-distributed
Distributed training (multi-node) of a Transformer model
☆91 · Updated last year
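For context, the pattern this repository demonstrates is roughly the one sketched below: a PyTorch `DistributedDataParallel` training loop intended to be launched with `torchrun` across nodes. This is a minimal sketch, not code from the repository itself; `TransformerModel` and `make_dataloader` are hypothetical placeholders.

```python
# Minimal multi-node DDP training sketch (assumed setup, not the repo's code).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = TransformerModel().cuda(local_rank)        # hypothetical model class
    model = DDP(model, device_ids=[local_rank])        # wrap for gradient all-reduce
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Hypothetical loader that shards the dataset by rank.
    for batch in make_dataloader(rank=dist.get_rank()):
        optimizer.zero_grad()
        logits = model(batch["input"])                 # assumed forward signature
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), batch["labels"].view(-1)
        )
        loss.backward()                                # DDP syncs gradients here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A script like this would typically be started on each node with something like `torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<host>:<port> train.py`.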
Alternatives and similar repositories for pytorch-transformer-distributed
Users interested in pytorch-transformer-distributed are comparing it to the libraries listed below.
- An extension of the nanoGPT repository for training small MoE models. ☆225 · Updated 10 months ago
- Minimal GRPO implementation from scratch ☆102 · Updated 10 months ago
- Notes on Direct Preference Optimization ☆23 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆365 · Updated 2 years ago
- ☆233 · Updated last year
- ☆224 · Updated last month
- ☆45 · Updated 7 months ago
- Prune transformer layers ☆74 · Updated last year
- GPU Kernels ☆218 · Updated 8 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 10 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆119 · Updated 2 years ago
- FlexAttention based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆328 · Updated 2 months ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆140 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- ☆100 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆114 · Updated last year
- Notes and commented code for RLHF (PPO) ☆121 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆140 · Updated 8 months ago
- Notes on quantization in neural networks ☆114 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Notes about the LLaMA 2 model ☆71 · Updated 2 years ago
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆70 · Updated 9 months ago
- ☆178 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆352 · Updated 8 months ago
- Making the official Triton tutorials actually comprehensible ☆93 · Updated 4 months ago
- An easy, reliable, fluid template for Python packages complete with docs, testing suites, READMEs, GitHub workflows, linting and much muc… ☆199 · Updated this week
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆187 · Updated 2 months ago