aju22 / LLaMA2
This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT) variant. The implementation focuses on the model architecture and the inference process. The code is restructured and heavily commented to make the key parts of the architecture easy to follow.
☆63 · Updated last year
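As a rough illustration of the kind of building block such a reimplementation walks through, the sketch below shows a minimal RMSNorm layer in PyTorch, one of the normalization choices that distinguishes LLaMA-style blocks from the original GPT layout. The class name and defaults are illustrative and are not taken from this repository's code.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer normalization, as used in LLaMA-style transformer blocks."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learnable per-feature scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each vector by the reciprocal RMS of its features (no mean subtraction, no bias).
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * inv_rms)
```

In a LLaMA-style block this normalization is typically applied before the attention and feed-forward sublayers (pre-norm) rather than after them.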
Alternatives and similar repositories for LLaMA2:
Users interested in LLaMA2 are comparing it to the repositories listed below.
- Official PyTorch implementation of QA-LoRA ☆129 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆202 · Updated 2 weeks ago
- Explorations into some recent techniques surrounding speculative decoding ☆250 · Updated 3 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆92 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- ☆125 · Updated last year
- ☆220 · Updated 9 months ago
- ☆181 · Updated 5 months ago
- ☆118 · Updated last month
- Low-bit optimizers for PyTorch ☆125 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (a minimal sketch of the technique appears after this list) ☆91 · Updated last year
- ☆145 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆329 · Updated 4 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆106 · Updated 2 weeks ago
- Easy and Efficient Quantization for Transformers ☆195 · Updated last month
- Code for studying the super weight in LLMs ☆94 · Updated 3 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆278 · Updated 3 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆196 · Updated 8 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆78 · Updated last year
- ☆194 · Updated 3 months ago
- ☆253 · Updated last year
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆191 · Updated last year
- A pipeline for LLM knowledge distillation ☆99 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 5 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆180 · Updated 2 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated 10 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆234 · Updated last month
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆129 · Updated 8 months ago
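Several of the entries above implement speculative sampling (draft-then-verify decoding). The sketch below is a minimal single-step version of the acceptance/rejection rule from the DeepMind paper, written in PyTorch; the `target` and `draft` callables and their interface are assumptions for illustration, not the API of any repository listed here.

```python
import torch

@torch.no_grad()
def speculative_step(target, draft, prefix, k=4):
    """One speculative-sampling step (draft proposes k tokens, target verifies).

    Assumed interface (illustrative): `target(seq)` and `draft(seq)` take a 1-D
    LongTensor of tokens and return a (len(seq), vocab) tensor whose row t is
    the model's probability distribution over the token following seq[:t+1].
    """
    seq = prefix.clone()
    proposals = []                                   # draft distributions, one per proposed token
    for _ in range(k):                               # 1) draft proposes k tokens autoregressively
        p = draft(seq)[-1]                           # (vocab,) distribution for the next token
        tok = torch.multinomial(p, 1)                # sample one draft token
        proposals.append(p)
        seq = torch.cat([seq, tok])

    q = target(seq)                                  # 2) target scores all drafted positions in one pass

    out = prefix.clone()
    for i, p in enumerate(proposals):                # 3) accept/reject each drafted token in order
        pos = len(prefix) + i                        # index of the i-th drafted token in `seq`
        tok = seq[pos]
        q_tok, p_tok = q[pos - 1, tok], p[tok]
        if torch.rand(()) < torch.clamp(q_tok / p_tok, max=1.0):
            out = torch.cat([out, tok.view(1)])      # accepted: keep the draft token
        else:
            resid = torch.clamp(q[pos - 1] - p, min=0.0)
            tok = torch.multinomial(resid / resid.sum(), 1)
            out = torch.cat([out, tok])              # rejected: resample from the residual and stop
            break
    return out
```

A full implementation would also draw one extra token from the target's final distribution when all k drafted tokens are accepted, and would repeat this step until an end-of-sequence token or a length limit is reached.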