knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆12 · Updated last year
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the repositories listed below.
- Tutorial for how to build BERT from scratch ☆100 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆161 · Updated last week
- LLaMA 2 implemented from scratch in PyTorch ☆361 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models. ☆215 · Updated 8 months ago
- Llama from scratch, or How to implement a paper without crying ☆581 · Updated last year
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes. ☆271 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆375 · Updated 5 months ago
- LLM Workshop by Sourab Mangrulkar ☆397 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Notes about the LLaMA 2 model ☆71 · Updated 2 years ago
- ☆224 · Updated last week
- GPU Kernels ☆209 · Updated 7 months ago
- Distributed training (multi-node) of a Transformer model ☆88 · Updated last year
- ☆81 · Updated last year
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆243 · Updated last year
- Notes and commented code for RLHF (PPO) ☆118 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆196 · Updated 6 months ago
- Advanced NLP, Spring 2025: https://cmu-l3.github.io/anlp-spring2025/ ☆68 · Updated 8 months ago
- Pre-training code for the Amber 7B LLM ☆169 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆112 · Updated last year
- ☆222 · Updated 11 months ago
- BERT explained from scratch ☆16 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆258 · Updated 2 years ago
- Fine-tuning open-source LLMs for adaptive machine translation ☆89 · Updated 4 months ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆317 · Updated 2 years ago
- Best practices for distilling large language models. ☆591 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆482 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆320 · Updated 2 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆313 · Updated last month