knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆11 · Updated last year
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the repositories listed below.
- Tutorial on how to build BERT from scratch ☆97 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆343 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆151 · Updated last year
- ☆184 · Updated 7 months ago
- ☆206 · Updated 5 months ago
- LLM Workshop by Sourab Mangrulkar ☆388 · Updated last year
- Notes about the LLaMA 2 model ☆66 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks ☆332 · Updated 3 weeks ago
- An extension of the nanoGPT repository for training small MoE models ☆164 · Updated 4 months ago
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation ☆84 · Updated 3 weeks ago
- Llama from scratch, or How to implement a paper without crying ☆573 · Updated last year
- A well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes ☆254 · Updated last year
- GPU Kernels ☆191 · Updated 3 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆69 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆76 · Updated last year
- 100 days of building GPU kernels! ☆477 · Updated 3 months ago
- ☆43 · Updated 2 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated 2 months ago
- Prune transformer layers ☆69 · Updated last year
- Best practices for distilling large language models ☆569 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆207 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆383 · Updated 4 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆112 · Updated 2 years ago
- Notes and commented code for RLHF (PPO) ☆101 · Updated last year
- A project to improve the skills of large language models ☆501 · Updated this week
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆230 · Updated 9 months ago
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆467 · Updated last year
- Implementation of BERT-based Language Models ☆19 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆153 · Updated 2 weeks ago