knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆11 · Updated last year
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the libraries listed below.
- Tutorial on how to build BERT from scratch ☆99 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆394 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆350 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆158 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆578 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆71 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆353 · Updated 2 months ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated 11 months ago
- Notes and commented code for RLHF (PPO) ☆106 · Updated last year
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation ☆85 · Updated 2 months ago
- Distributed training (multi-node) of a Transformer model ☆83 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆185 · Updated 6 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 3 months ago
- Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes. ☆259 · Updated last year
- ☆199 · Updated 8 months ago
- ☆95 · Updated 11 months ago
- Research projects built on top of Transformers ☆82 · Updated 6 months ago
- Best practices for distilling large language models. ☆574 · Updated last year
- A simplified version of Meta's Llama 3 model to be used for learning ☆42 · Updated last year
- ☆14 · Updated 5 months ago
- 100 days of building GPU kernels! ☆494 · Updated 4 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated last month
- GPU Kernels ☆193 · Updated 4 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- ☆217 · Updated 7 months ago
- Notes about LLaMA 2 model ☆68 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 10 months ago
- ☆44 · Updated 3 months ago
- BERT explained from scratch ☆14 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated this week