hkproj / bert-from-scratch
BERT explained from scratch
☆16 · Updated 2 years ago
Alternatives and similar repositories for bert-from-scratch
Users interested in bert-from-scratch are comparing it to the repositories listed below
- ☆100 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆115 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆122 · Updated 2 years ago
- ☆46 · Updated 8 months ago
- Distributed training (multi-node) of a Transformer model ☆92 · Updated last year
- Notes about LLaMA 2 model ☆71 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models. ☆231 · Updated 10 months ago
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆334 · Updated 2 years ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆129 · Updated 2 years ago
- Building GPT ... ☆18 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆365 · Updated 2 years ago
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆71 · Updated 10 months ago
- Notes on Direct Preference Optimization ☆24 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving. ☆117 · Updated 8 months ago
- Mixed precision training from scratch with Tensors and CUDA ☆28 · Updated last year
- Notes on quantization in neural networks ☆117 · Updated 2 years ago
- LLM Workshop by Sourab Mangrulkar ☆401 · Updated last year
- RAGs: Simple implementations of Retrieval Augmented Generation (RAG) Systems ☆141 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆260 · Updated 2 years ago
- Prune transformer layers ☆74 · Updated last year
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries. ☆69 · Updated 2 years ago
- Triton implementation of GPT/LLAMA ☆21 · Updated last year
- ☆85 · Updated 2 years ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆355 · Updated last week
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- ☆190 · Updated 2 years ago
- Code for studying the super weight in LLM ☆120 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we recreate its architecture in a simpler manner. ☆197 · Updated last year