coaxsoft / pytorch_bert
Tutorial on how to build BERT from scratch
☆101 · Updated last year
Alternatives and similar repositories for pytorch_bert
Users interested in pytorch_bert are comparing it to the libraries listed below.
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆119 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆365 · Updated 2 years ago
- Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes. ☆273 · Updated last year
- ☆82 · Updated last year
- Implementation of BERT-based Language Models ☆25 · Updated last year
- A minimal example of aligning language models with RLHF, similar to ChatGPT ☆224 · Updated 2 years ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆114 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆400 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆583 · Updated last year
- 🧠 A study guide to learn about Transformers ☆12 · Updated 2 years ago
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆206 · Updated 2 years ago
- Collection of links, tutorials, and best practices on how to collect data and build an end-to-end RLHF system to finetune Generative AI m… ☆224 · Updated 2 years ago
- Early solution for the Google AI4Code competition ☆76 · Updated 3 years ago
- Distributed training (multi-node) of a Transformer model ☆91 · Updated last year
- An open collection of implementation tips, tricks, and resources for training large language models ☆491 · Updated 2 years ago
- Notes and commented code for RLHF (PPO) ☆121 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆259 · Updated 2 years ago
- Implementation of the first paper on word2vec ☆246 · Updated 4 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Research projects built on top of Transformers ☆110 · Updated 10 months ago
- PyTorch Implementation of OpenAI GPT-2 ☆353 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆129 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models. ☆225 · Updated 10 months ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆382 · Updated 6 months ago
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆227 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆546 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆89 · Updated 3 years ago
- Official repository for ORPO ☆469 · Updated last year
- A simplified version of Meta's Llama 3 model to be used for learning ☆43 · Updated last year