hkproj / pytorch-llama-notes
Notes about the LLaMA 2 model
☆71, updated 2 years ago
Alternatives and similar repositories for pytorch-llama-notes
Users interested in pytorch-llama-notes are comparing it to the libraries listed below.
- LLaMA 2 implemented from scratch in PyTorch ☆361, updated 2 years ago
- Notes and commented code for RLHF (PPO) ☆118, updated last year
- Advanced NLP, Spring 2025 (https://cmu-l3.github.io/anlp-spring2025/) ☆68, updated 8 months ago
- Distributed training (multi-node) of a Transformer model ☆88, updated last year
- An extension of the nanoGPT repository for training small MoE models ☆215, updated 8 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74, updated 2 years ago
- ☆222, updated 11 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods ☆197, updated 10 months ago
- Efficient LLM Inference over Long Sequences ☆392, updated 5 months ago
- ☆403, updated 11 months ago
- Implementation of FlashAttention in PyTorch ☆175, updated 10 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆117, updated 2 years ago
- BERT explained from scratch ☆16, updated 2 years ago
- ☆224, updated last week
- ☆46, updated 7 months ago
- Notes on Direct Preference Optimization ☆23, updated last year
- Making the official Triton tutorials actually comprehensible ☆75, updated 3 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆349Updated 7 months ago
- ☆156Updated this week
- AIMO2 2nd place solution☆68Updated 6 months ago
- Slides for "Retrieval Augmented Generation" video☆23Updated 2 years ago
- This repository is the official implementation of "Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE"☆32Updated 2 months ago
- Awesome list for LLM quantization☆365Updated last month
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length☆132Updated last month
- ☆122Updated last year
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)☆329Updated 2 years ago
- A family of compressed models obtained via pruning and knowledge distillation☆359Updated last month
- HuggingFace conversion and training library for Megatron-based models☆250Updated this week
- Code release for book "Efficient Training in PyTorch"☆112Updated 7 months ago
- ☆99Updated last year