hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆365 · Updated 2 years ago
Alternatives and similar repositories for pytorch-llama
Users interested in pytorch-llama are comparing it to the repositories listed below
- ☆234 · Updated last year
- Notes about LLaMA 2 model · ☆72 · Updated 2 years ago
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch · ☆122 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model · ☆93 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… · ☆74 · Updated 2 years ago
- making the official triton tutorials actually comprehensible · ☆104 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MOE models. · ☆233 · Updated 10 months ago
- Implementation of FlashAttention in PyTorch · ☆180 · Updated last year
- Notes and commented code for RLHF (PPO) · ☆124 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) · ☆162 · Updated 2 months ago
- Awesome list for LLM quantization · ☆384 · Updated 3 months ago
- A family of compressed models obtained via pruning and knowledge distillation · ☆364 · Updated 3 months ago
- ☆412 · Updated last year
- Minimal hackable GRPO implementation · ☆321 · Updated last year
- ☆230 · Updated 2 months ago
- LoRA and DoRA from Scratch Implementations · ☆215 · Updated last year
- All Homeworks for TinyML and Efficient Deep Learning Computing 6.5940 • Fall • 2023 • https://efficientml.ai · ☆190 · Updated 2 years ago
- Notes on quantization in neural networks · ☆117 · Updated 2 years ago
- Llama from scratch, or How to implement a paper without crying · ☆585 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 · ☆356 · Updated 2 weeks ago
- ☆1,345 · Updated 11 months ago
- Explorations into some recent techniques surrounding speculative decoding · ☆299 · Updated last year
- a minimal cache manager for PagedAttention, on top of llama3. · ☆135 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications · ☆455 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆810 · Updated 11 months ago
- Official PyTorch implementation of QA-LoRA · ☆145 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) · ☆429 · Updated 4 months ago
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) · ☆334 · Updated 2 years ago
- Efficient LLM Inference over Long Sequences · ☆394 · Updated 7 months ago
- Code repo for the paper "LLM-QAT Data-Free Quantization Aware Training for Large Language Models" · ☆323 · Updated 11 months ago