hkproj / pytorch-llama-notes
Notes about the LLaMA 2 model
☆63 · Updated last year
Alternatives and similar repositories for pytorch-llama-notes
Users interested in pytorch-llama-notes are comparing it to the repositories listed below.
- LLaMA 2 implemented from scratch in PyTorch ☆337 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆68 · Updated last year
- Notes and commented code for RLHF (PPO) ☆97 · Updated last year
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆58 · Updated 3 months ago
- Distributed training (multi-node) of a Transformer model ☆72 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆160 · Updated 4 months ago
- Awesome list for LLM quantization ☆251 · Updated last month
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆191 · Updated 6 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆344 · Updated 8 months ago
- ☆179 · Updated 6 months ago
- Notes on Direct Preference Optimization ☆19 · Updated last year
- ☆316 · Updated 6 months ago
- ☆88 · Updated 9 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆98 · Updated last year
- Notes on quantization in neural networks ☆89 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆318 · Updated 2 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆108 · Updated 2 months ago
- Minimal GRPO implementation from scratch ☆92 · Updated 4 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆433 · Updated 5 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆178 · Updated 3 weeks ago
- Official PyTorch implementation of QA-LoRA ☆138 · Updated last year
- Awesome LLM plaza: daily tracking of all sorts of awesome LLM topics, e.g. LLMs for coding, robotics, reasoning, multimodality, etc. ☆204 · Updated last week
- A project to improve the skills of large language models ☆456 · Updated this week
- AIMO2 2nd place solution ☆59 · Updated last month
- TransMLA: Multi-Head Latent Attention Is All You Need ☆327 · Updated last week
- ☆153 · Updated 2 years ago
- Minimal hackable GRPO implementation ☆252 · Updated 5 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆45 · Updated 10 months ago
- Notes about the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆293 · Updated 2 years ago