coaxsoft / pytorch_bert
Tutorial for how to build BERT from scratch
☆93 · Updated last year
Alternatives and similar repositories for pytorch_bert
Users interested in pytorch_bert are comparing it to the repositories listed below.
- ☆83 · Updated last year
- 🧠 A study guide to learn about Transformers ☆11 · Updated last year
- ☆168 · Updated 5 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆329 · Updated last year
- Implementation of BERT-based Language Models ☆19 · Updated last year
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆104 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆68 · Updated last year
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer - for educational purposes. ☆250 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆203 · Updated last year
- ☆47 · Updated 3 years ago
- Notes and commented code for RLHF (PPO) ☆94 · Updated last year
- Playground for Transformers ☆51 · Updated last year
- A minimum example of aligning language models with RLHF similar to ChatGPT ☆218 · Updated last year
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆187 · Updated 11 months ago
- An open collection of implementation tips, tricks and resources for training large language models ☆473 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆85 · Updated 2 years ago
- Collection of links, tutorials and best practices of how to collect the data and build end-to-end RLHF system to finetune Generative AI m… ☆222 · Updated last year
- Minimal code to train a Large Language Model (LLM). ☆168 · Updated 2 years ago
- Early solution for Google AI4Code competition ☆76 · Updated 3 years ago
- I will build Transformer from scratch ☆69 · Updated last year
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆227 · Updated last year
- Efficient Attention for Long Sequence Processing ☆93 · Updated last year
- code for the ddp tutorial ☆32 · Updated 3 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆67 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆288 · Updated this week
- Define Transformers, T5 model and RoBERTa Encoder decoder model for product names generation ☆48 · Updated 3 years ago
- Prune transformer layers ☆69 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated 7 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆122 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆111 · Updated 8 months ago