hkproj / pytorch-transformer-distributed
Distributed training (multi-node) of a Transformer model
☆64 · Updated last year
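Since the repository's topic is multi-node distributed training of a Transformer, here is a minimal, generic PyTorch DistributedDataParallel sketch of that setup. It is not the repository's own code; the toy dataset, model sizes, and the assumption of a `torchrun` launch on NCCL-capable GPUs are illustrative only.

```python
# Generic multi-GPU / multi-node DDP training sketch (illustrative, not from the repo).
# Assumes launch via: torchrun --nnodes=<N> --nproc_per_node=<gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy data and a small TransformerEncoder as a stand-in model
    data = torch.randn(1024, 16, 64)          # (samples, seq_len, d_model)
    targets = torch.randn(1024, 16, 64)
    dataset = TensorDataset(data, targets)
    sampler = DistributedSampler(dataset)      # shards samples across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    layer = torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    model = torch.nn.TransformerEncoder(layer, num_layers=2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across ranks

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)               # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```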
Alternatives and similar repositories for pytorch-transformer-distributed:
Users interested in pytorch-transformer-distributed are comparing it to the repositories listed below.
- Notes on Direct Preference Optimization ☆19 · Updated last year
- minimal GRPO implementation from scratch ☆85 · Updated last month
- ☆155 · Updated 3 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 11 months ago
- Notes about LLaMA 2 model ☆59 · Updated last year
- Notes and commented code for RLHF (PPO) ☆86 · Updated last year
- Notes on quantization in neural networks ☆79 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆202 · Updated last year
- Prune transformer layers ☆68 · Updated 10 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 7 months ago
- ☆153 · Updated last year
- From scratch implementation of a vision language model in pure PyTorch ☆213 · Updated 11 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆64 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆28 · Updated 2 months ago
- Building GPT ... ☆17 · Updated 4 months ago
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆101 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆167 · Updated 3 weeks ago
- LLaMA 2 implemented from scratch in PyTorch ☆322 · Updated last year
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆91 · Updated 4 months ago
- Reference implementation of Mistral AI 7B v0.1 model. ☆28 · Updated last year
- GPU Kernels ☆160 · Updated 2 weeks ago
- Direct Preference Optimization Implementation ☆16 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆91 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆286 · Updated last week
- ☆169 · Updated 2 months ago
- An extension of the nanoGPT repository for training small MOE models. ☆131 · Updated last month
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆106 · Updated 6 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆57 · Updated last year