qhliu26 / Dive-into-Big-Model-Training
📑 Dive into Big Model Training
☆116 · Updated 3 years ago
Alternatives and similar repositories for Dive-into-Big-Model-Training
Users interested in Dive-into-Big-Model-Training are comparing it to the libraries listed below.
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆220 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆123 · Updated last year
- ☆123 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆259 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆180 · Updated 3 weeks ago
- ☆115 · Updated last year
- ☆150 · Updated 2 years ago
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆213 · Updated 4 months ago
- ☆89 · Updated 3 years ago
- ☆157 · Updated 2 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆113 · Updated 9 months ago
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core ☆149 · Updated 3 weeks ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆123 · Updated last year
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆181 · Updated this week
- Scalable PaLM implementation in PyTorch ☆189 · Updated 3 years ago
- Python pdb for multiple processes ☆76 · Updated 7 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆475 · Updated 8 months ago
- A minimal implementation of vLLM ☆64 · Updated last year
- ☆133 · Updated 7 months ago
- A simple calculator for LLM MFU ☆58 · Updated 4 months ago
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆214 · Updated 10 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆295 · Updated last year
- ☆79 · Updated 2 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆218 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM ☆139 · Updated 7 months ago
- ☆94 · Updated 3 years ago
- ☆22 · Updated 2 years ago
- Training library for Megatron-based models with bi-directional Hugging Face conversion capability ☆347 · Updated this week
- Zero Bubble Pipeline Parallelism ☆445 · Updated 8 months ago