hkproj / pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
☆363 · Updated 2 years ago
Alternatives and similar repositories for pytorch-llama
Users interested in pytorch-llama are comparing it to the repositories listed below.
- ☆228 · Updated 11 months ago
- Notes about the LLaMA 2 model ☆71 · Updated 2 years ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆118 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆90 · Updated last year
- Implementation of FlashAttention in PyTorch ☆178 · Updated 11 months ago
- An extension of the nanoGPT repository for training small MoE models ☆219 · Updated 9 months ago
- Notes and commented code for RLHF (PPO) ☆120 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆161 · Updated last month
- ☆403 · Updated last year
- Making the official Triton tutorials actually comprehensible ☆80 · Updated 4 months ago
- ☆225 · Updated last month
- All homeworks for TinyML and Efficient Deep Learning Computing 6.5940, Fall 2023, https://efficientml.ai ☆189 · Updated 2 years ago
- A family of compressed models obtained via pruning and knowledge distillation ☆361 · Updated last month
- Llama from scratch, or How to implement a paper without crying ☆581 · Updated last year
- Notes on quantization in neural networks ☆114 · Updated 2 years ago
- A minimal cache manager for PagedAttention, on top of Llama 3 ☆127 · Updated last year
- ☆178 · Updated last year
- Cataloging released Triton kernels ☆278 · Updated 3 months ago
- Tutorial for how to build BERT from scratch ☆100 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner ☆195 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆351 · Updated 7 months ago
- Notes about the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆333 · Updated 2 years ago
- LoRA and DoRA from-scratch implementations ☆214 · Updated last year
- Minimal hackable GRPO implementation ☆306 · Updated 10 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆186 · Updated last year
- Ring attention implementation with flash attention ☆949 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆419 · Updated 3 months ago
- Advanced NLP, Spring 2025, https://cmu-l3.github.io/anlp-spring2025/ ☆69 · Updated 9 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆444 · Updated 9 months ago