hkproj / pytorch-llama-notes
Notes about the LLaMA 2 model
☆72 · Updated 2 years ago
Alternatives and similar repositories for pytorch-llama-notes
Users interested in pytorch-llama-notes are comparing it to the repositories listed below.
- LLaMA 2 implemented from scratch in PyTorch ☆366 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- ☆236 · Updated last year
- Notes and commented code for RLHF (PPO) ☆124 · Updated last year
- Distributed (multi-node) training of a Transformer model ☆94 · Updated last year
- Making the official Triton tutorials actually comprehensible ☆111 · Updated 5 months ago
- Advanced NLP, Spring 2025 (https://cmu-l3.github.io/anlp-spring2025/) ☆71 · Updated 10 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆122 · Updated 2 years ago
- Awesome list for LLM quantization ☆390 · Updated 4 months ago
- Code release for the book "Efficient Training in PyTorch" ☆125 · Updated 10 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA 3 using various quantization methods ☆198 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆236 · Updated 11 months ago
- Implementation of FlashAttention in PyTorch ☆180 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner ☆200 · Updated last year
- Slides for the "Retrieval Augmented Generation" video ☆24 · Updated 2 years ago
- A family of compressed models obtained via pruning and knowledge distillation ☆364 · Updated 3 months ago
- ☆413 · Updated last year
- Implementation of speculative sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆106 · Updated last year
- Notes on quantization in neural networks ☆117 · Updated 2 years ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆147 · Updated last month
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆177 · Updated last year
- Reference implementation of the Mistral AI 7B v0.1 model ☆28 · Updated 2 years ago
- ☆166 · Updated 2 months ago
- Notes on the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆336 · Updated 2 years ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆394 · Updated 3 months ago
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- Notes on the Mistral AI model ☆20 · Updated 2 years ago
- ☆232 · Updated 2 months ago
- Notes on Direct Preference Optimization ☆24 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding" (ACL 2024) ☆357 · Updated last week