viai957 / llama-inference
A simple implementation of Llama 1 and 2. The Llama architecture is built from scratch in PyTorch, including GQA (Grouped-Query Attention), RoPE (Rotary Positional Embeddings), RMSNorm, the FeedForward block, the encoder stack (inference only), and the SwiGLU activation function.
☆13Updated last year
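The repository above lists RMSNorm among its from-scratch components. As a rough illustration of what such a building block looks like, here is a minimal, hypothetical PyTorch sketch of RMSNorm in the style used by Llama (the class name and defaults are assumptions, not taken from the repository):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Minimal RMSNorm sketch: normalize by the root mean square
    over the last dimension, with a learned per-feature scale.
    Unlike LayerNorm, there is no mean subtraction and no bias."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps  # small constant for numerical stability
        self.weight = nn.Parameter(torch.ones(dim))  # learned scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # rsqrt of the mean of squares along the feature dimension
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * inv_rms)
```

With the default all-ones scale, the output of each feature vector has a root mean square of approximately 1, which is the property Llama relies on for stable pre-normalization.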
Alternatives and similar repositories for llama-inference
Users interested in llama-inference are comparing it to the libraries listed below.
Sorting:
- Easy and Efficient Quantization for Transformers☆203Updated 2 months ago
- llama3.cuda is a pure C/CUDA implementation for Llama 3 model.☆343Updated 4 months ago
- Step by step explanation/tutorial of llama2.c☆223Updated last year
- 1-Click is all you need.☆62Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO☆116Updated last year
- Showing various ways to serve Keras-based Stable Diffusion☆111Updated 2 years ago
- ☆50Updated 10 months ago
- LoRA and DoRA from Scratch Implementations☆211Updated last year
- An extension of the nanoGPT repository for training small MOE models.☆185Updated 6 months ago
- Google TPU optimizations for transformers models☆120Updated 7 months ago
- Learn CUDA with PyTorch☆74Updated last week
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free☆232Updated 10 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)…☆70Updated last year
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch☆116Updated 2 years ago
- Simple Adaptation of BitNet☆32Updated last year
- Training small GPT-2 style models using Kolmogorov-Arnold networks.☆121Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand☆193Updated 3 months ago
- GPT2 fine-tuning pipeline with KerasNLP, TensorFlow, and TensorFlow Extended☆33Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆266Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024)☆161Updated 5 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients.☆200Updated last year
- Notes on quantization in neural networks☆98Updated last year
- ☆216Updated 7 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!)☆158Updated last year
- Slide decks, coding exercises, and quick references for learning the JAX AI Stack☆33Updated 2 weeks ago
- Various transformers for FSDP research☆38Updated 2 years ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner.☆184Updated last year
- Inference Llama/Llama2/Llama3 Models in NumPy☆21Updated last year
- ☆46Updated 5 months ago
- Reinforcement Learning example in Nim, playing tic-tac-toe. Based on the original C version from the great antirez☆14Updated 5 months ago