coaxsoft / pytorch_bert
Tutorial on how to build BERT from scratch
☆99 · Updated last year
Alternatives and similar repositories for pytorch_bert
Users interested in pytorch_bert are comparing it to the libraries listed below.
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆117 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆358 · Updated 2 years ago
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes ☆265 · Updated last year
- 🧠 A study guide to learn about Transformers ☆12 · Updated last year
- Collection of links, tutorials and best practices for collecting data and building an end-to-end RLHF system to finetune Generative AI m… ☆223 · Updated 2 years ago
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- A minimal example of aligning language models with RLHF, similar to ChatGPT ☆222 · Updated 2 years ago
- Llama from scratch, or How to implement a paper without crying ☆579 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆394 · Updated last year
- ☆82 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆110 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆72 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆482 · Updated 2 years ago
- Prune transformer layers ☆69 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆160 · Updated last year
- A simplified version of Meta's Llama 3 model to be used for learning ☆43 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆85 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 10 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆126 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆87 · Updated 2 years ago
- PyTorch implementation of OpenAI GPT-2 ☆345 · Updated last year
- Implementation of the first paper on word2vec ☆245 · Updated 3 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated 2 years ago
- Scripts for fine-tuning Llama2 via SFT and DPO ☆204 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆466 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆202 · Updated 7 months ago
- Implements Low-Rank Adaptation (LoRA) finetuning from scratch ☆81 · Updated 2 years ago
- Notes about the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆318 · Updated 2 years ago
- A simplified PyTorch implementation of Vision Transformer (ViT) ☆216 · Updated last year
- ☆209 · Updated 9 months ago