tysam-code / hlb-gpt
Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha).
☆355 · Updated last year
Alternatives and similar repositories for hlb-gpt
Users interested in hlb-gpt are comparing it to the repositories listed below
- Puzzles for exploring transformers · ☆382 · Updated 2 years ago
- ☆416 · Updated 2 years ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript · ☆612 · Updated last year
- Language Modeling with the H3 State Space Model · ☆521 · Updated 2 years ago
- Fast bare-bones BPE for modern tokenizer training · ☆174 · Updated 6 months ago
- ☆94 · Updated 2 years ago
- Simple Transformer in Jax · ☆140 · Updated last year
- Helpers and such for working with Lambda Cloud · ☆51 · Updated 2 years ago
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick (see the sketch after this list) · ☆293 · Updated 2 years ago
- The repository for the code of the UltraFastBERT paper · ☆519 · Updated last year
- A small code base for training large models · ☆318 · Updated 8 months ago
- Batched LoRAs · ☆348 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- A puzzle to learn about prompting · ☆135 · Updated 2 years ago
- ☆314 · Updated last year
- Extract full next-token probabilities via language model APIs · ☆248 · Updated last year
- An interactive exploration of Transformer programming. · ☆271 · Updated 2 years ago
- JAX implementation of the Llama 2 model · ☆215 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆562 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s · ☆723 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development · ☆217 · Updated last year
- ☆287 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free · ☆233 · Updated last year
- Full finetuning of large language models without large memory requirements · ☆94 · Updated 3 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes · ☆83 · Updated 2 years ago
- A repository for log-time feedforward networks · ☆224 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ☆86 · Updated 2 years ago
- Annotated version of the Mamba paper · ☆494 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… · ☆183 · Updated 2 months ago
- Solve puzzles. Learn CUDA. · ☆63 · Updated 2 years ago
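
One entry above pairs nanoGPT with the Gumbel-Softmax trick. For readers comparing these repositories, here is a minimal, generic PyTorch sketch of that trick (differentiable sampling from a categorical distribution); it is an illustration of the general technique, not code taken from that repository, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0, hard: bool = False) -> torch.Tensor:
    """Differentiable sample from Categorical(softmax(logits)) via the Gumbel-Softmax trick.

    logits: (..., num_classes) unnormalized log-probabilities.
    tau:    temperature; lower values push samples closer to one-hot.
    hard:   if True, return a one-hot sample in the forward pass while
            keeping the soft gradients (straight-through estimator).
    """
    # Gumbel(0, 1) noise via inverse CDF: g = -log(-log(U)), U ~ Uniform(0, 1).
    u = torch.rand_like(logits).clamp_min(1e-10)
    gumbel = -torch.log(-torch.log(u))
    y_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    if hard:
        # Straight-through: one-hot in the forward pass, soft in the backward pass.
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
        return y_hard + (y_soft - y_soft.detach())
    return y_soft

# Toy usage: sample "tokens" from random logits while retaining gradients.
logits = torch.randn(2, 8, requires_grad=True)
sample = gumbel_softmax_sample(logits, tau=0.5, hard=True)
sample.sum().backward()  # gradients flow to `logits` despite the discrete forward pass
```

PyTorch ships an equivalent built-in, `torch.nn.functional.gumbel_softmax`; the sketch above just makes the mechanics explicit.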