hkproj / pytorch-llama-notes
Notes about the LLaMA 2 model
☆59 · Updated last year
Alternatives and similar repositories for pytorch-llama-notes
Users interested in pytorch-llama-notes are comparing it to the libraries listed below.
- ☆168 · Updated 5 months ago
- Notes and commented code for RLHF (PPO) ☆94 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆328 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆67 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆68 · Updated last year
- minimal GRPO implementation from scratch ☆90 · Updated 2 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆95 · Updated last year
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆53 · Updated 2 months ago
- An extension of the nanoGPT repository for training small MOE models. ☆147 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated last month
- A family of compressed models obtained via pruning and knowledge distillation ☆341 · Updated 6 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 8 months ago
- Notes on Direct Preference Optimization ☆19 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need ☆284 · Updated this week
- Reference implementation of Mistral AI 7B v0.1 model. ☆29 · Updated last year
- Notes on the Mistral AI model ☆19 · Updated last year
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆105 · Updated 3 weeks ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (see the sketch after this list) ☆104 · Updated last year
- ☆87 · Updated 8 months ago
- Efficient LLM Inference over Long Sequences ☆376 · Updated this week
- Awesome list for LLM quantization ☆223 · Updated 5 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆160 · Updated 11 months ago
- LLaMA 3 is one of the most promising open-source models after Mistral; this repo recreates its architecture in a simpler manner. ☆166 · Updated 9 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆152 · Updated this week
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆173 · Updated 2 months ago
- ☆39 · Updated 3 weeks ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆204 · Updated last year
- 🧠 A study guide to learn about Transformers ☆11 · Updated last year
- From scratch implementation of a vision language model in pure PyTorch ☆220 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆163 · Updated 10 months ago
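
Several of the entries above are from-scratch PyTorch implementations of a single technique. As a rough illustration of the low-rank adaptation idea behind the LoRA entry, here is a minimal, hypothetical PyTorch sketch; it is not taken from any of the listed repositories, and the class and parameter names (`LoRALinear`, `r`, `alpha`) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical sketch: a frozen linear layer plus a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        # Low-rank factors: B is zero-initialized so the initial update is zero.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + (alpha / r) * x A^T B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(512, 512, r=8)
y = layer(torch.randn(4, 512))  # only lora_A and lora_B receive gradients
```

Only the two low-rank factors are trainable, so fine-tuning touches a small fraction of the parameters; the `alpha / r` scaling follows the convention used in the original LoRA paper.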