Cerebras / gigaGPT
a small code base for training large models
☆286 · Updated 2 months ago
Alternatives and similar repositories for gigaGPT:
Users who are interested in gigaGPT are comparing it to the libraries listed below.
- Open weights language model from Google DeepMind, based on Griffin. ☆620 · Updated 7 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆200 · Updated 3 months ago
- Visualize the intermediate output of Mistral 7B ☆339 · Updated 3 weeks ago
- Inference code for Persimmon-8B ☆416 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆265 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆230 · Updated 3 months ago
- ☆412 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆604 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- Mistral7B playing DOOM ☆127 · Updated 7 months ago
- An implementation of bucketMul LLM inference ☆215 · Updated 7 months ago
- ☆524 · Updated 3 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆342 · Updated 6 months ago
- Fast bare-bones BPE for modern tokenizer training ☆146 · Updated 4 months ago
- The repository for the code of the UltraFastBERT paper ☆517 · Updated 10 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆169 · Updated 9 months ago
- ☆143 · Updated last year
- Long context evaluation for large language models ☆200 · Updated last week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆277 · Updated last week
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆250 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆191 · Updated 7 months ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆361 · Updated 11 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆748 · Updated this week
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- run paligemma in real time ☆130 · Updated 9 months ago
- A Jax-based library for designing and training transformer models from scratch. ☆281 · Updated 5 months ago