hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆323 · Updated last year
Alternatives and similar repositories for pytorch-llama:
Users interested in pytorch-llama are comparing it to the repositories listed below
- ☆159 · Updated 4 months ago
- Notes about the LLaMA 2 model ☆59 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆102 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆65 · Updated last year
- Notes and commented code for RLHF (PPO) ☆90 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆203 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆94 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆425 · Updated 3 months ago
- ☆181 · Updated 2 months ago
- Notes on quantization in neural networks ☆81 · Updated last year
- Notes about the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆273 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆379 · Updated 10 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆261 · Updated 4 months ago
- Scalable toolkit for efficient model alignment ☆786 · Updated this week
- Efficient LLM Inference over Long Sequences ☆372 · Updated last week
- A simple and effective LLM pruning approach ☆741 · Updated 9 months ago
- "Attention Is All You Need" implementation ☆911 · Updated 11 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆346 · Updated 8 months ago
- ☆221 · Updated 10 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆511 · Updated 6 months ago
- ☆260 · Updated 4 months ago
- Fast inference from large language models via speculative decoding ☆722 · Updated 8 months ago
- ☆155 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton ☆536 · Updated last week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups at medium batch sizes of 16–32 tokens ☆812 · Updated 8 months ago
- Ring attention implementation with flash attention ☆759 · Updated last month
- Coding a Multimodal (Vision) Language Model from scratch in PyTorch with full explanation: https://www.youtube.com/watch?v=vAmKB7iPkWw ☆461 · Updated 5 months ago
- Official PyTorch implementation of QA-LoRA ☆132 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆807 · Updated this week
- Large Context Attention ☆709 · Updated 3 months ago