knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆12, updated last year
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the repositories listed below.
- Tutorial for how to build BERT from scratch (☆101, updated last year)
- LLM Workshop by Sourab Mangrulkar (☆395, updated last year)
- Distributed training (multi-node) of a Transformer model (☆86, updated last year)
- A set of scripts and notebooks on LLM fine-tuning and dataset creation (☆111, updated last year)
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand (☆195, updated 5 months ago)
- ☆225, updated 3 weeks ago
- ☆216, updated 10 months ago
- ☆99, updated last year
- Llama from scratch, or How to implement a paper without crying (☆580, updated last year)
- ☆14, updated 7 months ago
- LLaMA 2 implemented from scratch in PyTorch (☆358, updated 2 years ago)
- An extension of the nanoGPT repository for training small MoE models (☆210, updated 8 months ago)
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) (☆160, updated last year)
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… (☆74, updated 2 years ago)
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer, for educational purposes (☆270, updated last year)
- ☆45, updated 5 months ago
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation (☆87, updated 4 months ago)
- Notes and commented code for RLHF (PPO) (☆114, updated last year)
- LoRA and DoRA from Scratch Implementations (☆211, updated last year)
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks (☆372, updated 4 months ago)
- Manage scalable open LLM inference endpoints in Slurm clusters (☆276, updated last year)
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner (☆190, updated last year)
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. (☆78, updated last year)
- Notes on quantization in neural networks (☆105, updated last year)
- GPU Kernels (☆206, updated 6 months ago)
- Research projects built on top of Transformers (☆96, updated 8 months ago)
- Notes about the LLaMA 2 model (☆69, updated 2 years ago)
- Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation (☆35, updated last year)
- A minimum example of aligning language models with RLHF similar to ChatGPT (☆224, updated 2 years ago)
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… (☆242, updated last year)