hkproj / pytorch-llama-notes
Notes about the LLaMA 2 model
☆61 · Updated last year
Alternatives and similar repositories for pytorch-llama-notes
Users interested in pytorch-llama-notes are comparing it to the libraries listed below.
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆68 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆335 · Updated last year
- ☆174 · Updated 5 months ago
- Notes and commented code for RLHF (PPO) ☆96 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆71 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need ☆310 · Updated this week
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆189 · Updated 5 months ago
- Slides for "Retrieval Augmented Generation" video ☆20 · Updated last year
- Notes on the Mistral AI model ☆19 · Updated last year
- ☆78 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆311 · Updated last month
- Notes on Direct Preference Optimization ☆19 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆204 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆109 · Updated last year
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆55 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆158 · Updated 2 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆221 · Updated 3 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆98 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆176 · Updated this week
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆303 · Updated 5 months ago
- Reference implementation of Mistral AI 7B v0.1 model ☆29 · Updated last year
- Efficient LLM Inference over Long Sequences ☆378 · Updated 2 weeks ago
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" ☆372 · Updated this week
- Tina: Tiny Reasoning Models via LoRA ☆260 · Updated 3 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆134 · Updated 3 weeks ago
- ☆256 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆343 · Updated 7 months ago
- Obsolete version of the CUDA-mode repo -- use cuda-mode/lectures instead ☆25 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆282 · Updated 2 months ago