coaxsoft / pytorch_bert
Tutorial for how to build BERT from scratch
☆83 · Updated 5 months ago
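For orientation, here is a minimal sketch of the kind of BERT-style encoder such a tutorial builds: token, position, and segment embeddings feeding a stack of Transformer encoder layers with a masked-language-model head. This is an illustrative assumption, not code from coaxsoft/pytorch_bert; it leans on PyTorch's built-in `nn.TransformerEncoder` rather than hand-written attention modules, and all names and hyperparameters are invented for the example.

```python
# Minimal BERT-style encoder sketch (illustrative only; not from coaxsoft/pytorch_bert).
import torch
import torch.nn as nn


class MiniBertEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4,
                 n_layers=4, max_len=128):
        super().__init__()
        # Token + learned position + segment embeddings, as in BERT.
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.seg_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, activation="gelu")
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Masked-language-model head: project hidden states back to vocabulary logits.
        self.mlm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, segment_ids, pad_mask=None):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = (self.tok_emb(token_ids)
             + self.pos_emb(positions)
             + self.seg_emb(segment_ids))
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.mlm_head(x)  # (batch, seq_len, vocab_size)


# Quick smoke test with random token ids.
tokens = torch.randint(0, 30522, (2, 16))
segments = torch.zeros(2, 16, dtype=torch.long)
logits = MiniBertEncoder()(tokens, segments)
print(logits.shape)  # torch.Size([2, 16, 30522])
```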
Related projects
Alternatives and complementary repositories for pytorch_bert
- LLaMA 2 implemented from scratch in PyTorch ☆254 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆82 · Updated last year
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes. ☆220 · Updated 7 months ago
- ☆685 · Updated last month
- Prune transformer layers ☆64 · Updated 5 months ago
- I will build a Transformer from scratch ☆50 · Updated 6 months ago
- LoRA and DoRA from Scratch Implementations ☆188 · Updated 8 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆93 · Updated last month
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆142 · Updated 5 months ago
- Collection of links, tutorials and best practices on how to collect data and build an end-to-end RLHF system to finetune Generative AI m… ☆195 · Updated last year
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆86 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆252 · Updated last year
- Training and Fine-tuning an LLM in Python and PyTorch. ☆41 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ☆460 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆118 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆176 · Updated last month
- A simplified version of Meta's Llama 3 model to be used for learning ☆32 · Updated 5 months ago
- Distributed training (multi-node) of a Transformer model ☆43 · Updated 7 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆169 · Updated 2 months ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ☆124 · Updated last year
- 🧠 A study guide to learn about Transformers ☆10 · Updated 10 months ago
- Official PyTorch implementation of QA-LoRA ☆117 · Updated 8 months ago
- ☆44 · Updated last week
- Scaling Data-Constrained Language Models ☆321 · Updated last month
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆435 · Updated 6 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 5 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆236 · Updated 4 months ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆182 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆177 · Updated 9 months ago