hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆365 · Updated 2 years ago
Alternatives and similar repositories for pytorch-llama
Users interested in pytorch-llama are comparing it to the repositories listed below.
- ☆233 · Updated last year
- Notes about the LLaMA 2 model · ☆71 · Updated 2 years ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch · ☆119 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… · ☆74 · Updated 2 years ago
- Making the official Triton tutorials actually comprehensible · ☆93 · Updated 4 months ago
- An extension of the nanoGPT repository for training small MoE models · ☆225 · Updated 10 months ago
- Implementation of FlashAttention in PyTorch · ☆180 · Updated last year
- Distributed training (multi-node) of a Transformer model · ☆91 · Updated last year
- Notes and commented code for RLHF (PPO) · ☆121 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) · ☆161 · Updated last month
- ☆405 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation · ☆363 · Updated 2 months ago
- Notes on quantization in neural networks · ☆114 · Updated 2 years ago
- A minimal cache manager for PagedAttention, on top of llama3 · ☆130 · Updated last year
- ☆224 · Updated last month
- LoRA and DoRA from Scratch Implementations · ☆215 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding · ☆296 · Updated last year
- Minimal hackable GRPO implementation · ☆315 · Updated 11 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) · ☆423 · Updated 3 months ago
- Cataloging released Triton kernels · ☆282 · Updated 4 months ago
- Llama from scratch, or How to implement a paper without crying · ☆583 · Updated last year
- Efficient LLM Inference over Long Sequences · ☆393 · Updated 6 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" · ☆322 · Updated 10 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ☆328 · Updated 2 months ago
- Notes about the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) · ☆332 · Updated 2 years ago
- Tutorial on how to build BERT from scratch · ☆101 · Updated last year
- Awesome list for LLM quantization · ☆378 · Updated 3 months ago
- Ring attention implementation with flash attention · ☆963 · Updated 4 months ago
- Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch · ☆151 · Updated 5 months ago
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes · ☆273 · Updated last year