tysam-code / hlb-gpt
Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha).
☆342 · Updated 6 months ago
Alternatives and similar repositories for hlb-gpt:
Users interested in hlb-gpt are comparing it to the libraries listed below.
- Fast bare-bones BPE for modern tokenizer training ☆146 · Updated 3 months ago
- ☆92 · Updated last year
- ☆143 · Updated last year
- Puzzles for exploring transformers ☆332 · Updated last year
- Extract full next-token probabilities via language model APIs ☆228 · Updated 11 months ago
- a small code base for training large models ☆286 · Updated 2 months ago
- ☆299 · Updated 7 months ago
- A puzzle to learn about prompting ☆124 · Updated last year
- ☆208 · Updated 7 months ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆207 · Updated 3 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆255 · Updated last year
- Helpers and such for working with Lambda Cloud ☆51 · Updated last year
- ☆412 · Updated last year
- Annotated version of the Mamba paper ☆473 · Updated 11 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆620 · Updated 7 months ago
- Language Modeling with the H3 State Space Model ☆516 · Updated last year
- Solve puzzles. Learn CUDA. ☆62 · Updated last year
- batched loras ☆338 · Updated last year
- seqax = sequence modeling + JAX ☆143 · Updated 7 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆82 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- A repository for log-time feedforward networks ☆219 · Updated 10 months ago
- run paligemma in real time ☆130 · Updated 9 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆56 · Updated 3 months ago
- Simple Transformer in Jax ☆136 · Updated 7 months ago
- An interactive exploration of Transformer programming. ☆258 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆185 · Updated 8 months ago