knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆12 · Updated 2 years ago
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the repositories listed below.
- Tutorial on how to build BERT from scratch ☆102 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆585 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆236 · Updated 10 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆162 · Updated 2 months ago
- Distributed training (multi-node) of a Transformer model ☆93 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆116 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆366 · Updated 2 years ago
- Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes. ☆282 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆122 · Updated 2 years ago
- LLM Workshop by Sourab Mangrulkar ☆401 · Updated last year
- A simplified version of Meta's Llama 3 model, to be used for learning ☆44 · Updated last year
- Notes about the LLaMA 2 model ☆72 · Updated 2 years ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆382 · Updated 7 months ago
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Notes on the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆336 · Updated 2 years ago
- LLaMA 3 is one of the most promising open-source models after Mistral; this repository recreates its architecture in a simpler manner. ☆200 · Updated last year
- ☆232 · Updated 2 months ago
- ☆100 · Updated last year
- Notes and commented code for RLHF (PPO) ☆124 · Updated last year
- ☆236 · Updated last year
- ☆46 · Updated 8 months ago
- Implementation of BERT-based Language Models ☆26 · Updated last year
- Building a 2.3M-parameter LLM from scratch with the LLaMA 1 architecture. ☆197 · Updated last year
- GPU Kernels ☆220 · Updated 9 months ago
- ☆82 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆250 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated last year
- Reference implementation of the Mistral AI 7B v0.1 model. ☆28 · Updated 2 years ago