hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆358, updated 2 years ago
Alternatives and similar repositories for pytorch-llama
Users interested in pytorch-llama are comparing it to the repositories listed below.
- ☆216, updated 10 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (☆117, updated 2 years ago)
- Notes about the LLaMA 2 model (☆69, updated 2 years ago)
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… (☆74, updated 2 years ago)
- An extension of the nanoGPT repository for training small MoE models (☆210, updated 8 months ago)
- A family of compressed models obtained via pruning and knowledge distillation (☆355, updated last week)
- Implementation of FlashAttention in PyTorch (☆173, updated 10 months ago)
- Awesome list for LLM quantization (☆340, updated last month)
- Notes and commented code for RLHF (PPO) (☆114, updated last year)
- Distributed training (multi-node) of a Transformer model (☆86, updated last year)
- Making the official Triton tutorials actually comprehensible (☆61, updated 2 months ago)
- ☆393, updated 10 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) (☆160, updated last year)
- A minimal cache manager for PagedAttention, on top of llama3 (☆125, updated last year)
- Llama from scratch, or How to implement a paper without crying (☆580, updated last year)
- Notes on quantization in neural networks (☆105, updated last year)
- Explorations into some recent techniques surrounding speculative decoding (☆290, updated 10 months ago)
- Official PyTorch implementation of QA-LoRA (☆143, updated last year)
- For releasing code related to compression methods for transformers, accompanying our publications (☆447, updated 10 months ago)
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) (☆407, updated last month)
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (☆319, updated 8 months ago)
- Efficient LLM Inference over Long Sequences (☆390, updated 4 months ago)
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆346, updated 6 months ago)
- Cataloging released Triton kernels (☆265, updated 2 months ago)
- ☆225, updated 3 weeks ago
- Fast inference from large language models via speculative decoding (☆853, updated last year)
- Flash Attention in ~100 lines of CUDA (forward pass only) (☆968, updated 10 months ago)
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (☆106, updated last year)
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference (☆302, updated 2 weeks ago)
- Tutorial for how to build BERT from scratch (☆101, updated last year)