tysam-code / hlb-gpt
Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha).
☆ 342 · Updated 8 months ago
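A minimal quick-start sketch, assuming the single-file `main.py` entry point and `requirements.txt` described in the repo's README (the project changes quickly, so verify against the current README before running):

```bash
# Clone the repo, install its dependencies, and launch a training run on wikitext-103.
# main.py and requirements.txt are assumed from the README; check before running.
git clone https://github.com/tysam-code/hlb-gpt && cd hlb-gpt
python -m pip install -r requirements.txt
python main.py
```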
Alternatives and similar repositories for hlb-gpt:
Users interested in hlb-gpt are comparing it to the repositories listed below.
- ☆ 143 · Updated 2 years ago
- ☆ 412 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆ 152 · Updated 2 weeks ago
- Batched LoRAs ☆ 341 · Updated last year
- A small codebase for training large models ☆ 290 · Updated 4 months ago
- JAX implementation of the Llama 2 model ☆ 218 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆ 709 · Updated last year
- ☆ 92 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆ 576 · Updated 9 months ago
- Puzzles for exploring transformers ☆ 343 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆ 211 · Updated 5 months ago
- Language Modeling with the H3 State Space Model ☆ 519 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆ 517 · Updated last year
- Simple Transformer in JAX ☆ 136 · Updated 9 months ago
- ☆ 215 · Updated 9 months ago
- Extract full next-token probabilities via language model APIs ☆ 241 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆ 289 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆ 566 · Updated last week
- ☆ 302 · Updated 9 months ago
- Helpers and such for working with Lambda Cloud ☆ 51 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆ 1,000 · Updated 8 months ago
- Fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆ 231 · Updated 5 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆ 169 · Updated this week
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆ 82 · Updated last year
- Simple embedding -> text model trained on a small subset of Wikipedia sentences ☆ 153 · Updated last year
- A repository for log-time feedforward networks ☆ 221 · Updated last year
- A puzzle to learn about prompting ☆ 127 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆ 94 · Updated last year
- Draw more samples ☆ 189 · Updated 9 months ago
- Inference code for Persimmon-8B ☆ 415 · Updated last year