coaxsoft / pytorch_bert
Tutorial for how to build BERT from scratch
☆92 · Updated 11 months ago
Alternatives and similar repositories for pytorch_bert:
Users interested in pytorch_bert are comparing it to the libraries listed below.
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes. ☆244 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆102 · Updated last year
- A study guide to learn about Transformers ☆11 · Updated last year
- ☆82 · Updated last year
- ☆46 · Updated 3 years ago
- Implementation of BERT-based Language Models ☆19 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆203 · Updated last year
- Defines Transformer, T5 and RoBERTa encoder-decoder models for product name generation ☆48 · Updated 3 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆323 · Updated last year
- Notes and commented code for RLHF (PPO) ☆90 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆65 · Updated last year
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆182 · Updated 10 months ago
- Collection of links, tutorials and best practices on how to collect data and build an end-to-end RLHF system to finetune Generative AI m… ☆220 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆328 · Updated 10 months ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆92 · Updated last year
- Coding a Transformer neural network's components piece by piece ☆343 · Updated 2 years ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆124 · Updated last year
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆85 · Updated 2 years ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆108 · Updated 7 months ago
- Recurrent Memory Transformer ☆149 · Updated last year
- ☆159 · Updated 4 months ago
- A minimum example of aligning language models with RLHF similar to ChatGPT ☆217 · Updated last year
- Fine tune a T5 transformer model using PyTorch & Transformers ☆212 · Updated 4 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆103 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆66 · Updated last year
- I will build Transformer from scratch ☆68 · Updated 11 months ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ☆129 · Updated 2 years ago
- Direct Preference Optimization from scratch in PyTorch ☆91 · Updated last month
- This repository contains a custom implementation of the BERT model, fine-tuned for specific tasks, along with an implementation of Low Ra… ☆72 · Updated last year