knotgrass / How-Transformers-Work
🧠 A study guide to learn about Transformers
☆12 · Updated last year
Alternatives and similar repositories for How-Transformers-Work
Users interested in How-Transformers-Work are comparing it to the repositories listed below.
- Tutorial on how to build BERT from scratch ☆99 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆358 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models ☆202 · Updated 7 months ago
- LLM Workshop by Sourab Mangrulkar ☆394 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆579 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆72 · Updated 2 years ago
- Notes about the LLaMA 2 model ☆68 · Updated 2 years ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆160 · Updated last year
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes ☆265 · Updated last year
- ☆224 · Updated last week
- Distributed training (multi-node) of a Transformer model ☆85 · Updated last year
- Fine-tuning Open-Source LLMs for Adaptive Machine Translation ☆87 · Updated 3 months ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks ☆369 · Updated 3 months ago
- Prune transformer layers ☆69 · Updated last year
- ☆98 · Updated last year
- LoRA and DoRA from-scratch implementations ☆211 · Updated last year
- Repo for the Belebele dataset, a massively multilingual reading comprehension dataset ☆335 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆477 · Updated last year
- ☆209 · Updated 9 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆273 · Updated last year
- Easily embed, cluster, and semantically label text datasets ☆581 · Updated last year
- A compact LLM pretrained in 9 days using high-quality data ☆330 · Updated 6 months ago
- Official PyTorch implementation of QA-LoRA ☆141 · Updated last year
- Fine-tune ModernBERT on a large dataset with custom tokenizer training ☆73 · Updated last week
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated 2 years ago