hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆347 · Updated last year
Alternatives and similar repositories for pytorch-llama
Users interested in pytorch-llama are comparing it to the repositories listed below.
- ☆192 · Updated 7 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch ☆112 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆70 · Updated last year
- Notes about the LLaMA 2 model ☆66 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆178 · Updated 5 months ago
- ☆349 · Updated 8 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL, and Python, with multi-GPU support and automatic differentiation!) ☆152 · Updated last year
- Notes and commented code for RLHF (PPO) ☆104 · Updated last year
- ☆211 · Updated 6 months ago
- Llama from scratch, or how to implement a paper without crying ☆578 · Updated last year
- Awesome list for LLM quantization ☆279 · Updated this week
- Implementation of FlashAttention in PyTorch ☆162 · Updated 7 months ago
- Efficient LLM inference over long sequences ☆389 · Updated 2 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆348 · Updated 9 months ago
- Making the official Triton tutorials actually comprehensible ☆53 · Updated last month
- LoRA and DoRA from-scratch implementations ☆209 · Updated last year
- Notes on the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆301 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆79 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need ☆339 · Updated last month
- Cataloging released Triton kernels ☆252 · Updated 7 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆328 · Updated 3 months ago
- ☆92 · Updated 11 months ago
- Notes on quantization in neural networks ☆96 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆282 · Updated 8 months ago
- A well-documented, unit-tested, type-checked, and formatted implementation of a vanilla Transformer, for educational purposes ☆257 · Updated last year
- A minimal cache manager for PagedAttention, built on top of Llama 3 ☆118 · Updated last year
- Advanced NLP, Spring 2025 (https://cmu-l3.github.io/anlp-spring2025/) ☆64 · Updated 4 months ago
- All homeworks for TinyML and Efficient Deep Learning Computing (6.5940, Fall 2023, https://efficientml.ai) ☆177 · Updated last year
- Minimal, hackable GRPO implementation ☆281 · Updated 6 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆107 · Updated 2 years ago