Cerebras / gigaGPT
A small code base for training large models
☆309 · Updated 3 months ago
Alternatives and similar repositories for gigaGPT
Users interested in gigaGPT are comparing it to the libraries listed below.
- Open weights language model from Google DeepMind, based on Griffin. ☆647 · Updated 2 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Visualize the intermediate output of Mistral 7B ☆368 · Updated 7 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆210 · Updated 9 months ago
- Run PaliGemma in real time ☆131 · Updated last year
- A pure NumPy implementation of Mamba (a minimal scan sketch follows this list) ☆224 · Updated last year
- Fast bare-bones BPE for modern tokenizer training (a minimal trainer sketch follows this list) ☆164 · Updated 2 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention (a position-remapping sketch follows this list) ☆119 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated 9 months ago
- ☆416 · Updated last year
- ☆864 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆318 · Updated 10 months ago
- Code for the UltraFastBERT paper ☆517 · Updated last year
- ☆447 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆599 · Updated last year
- An implementation of bucketMul LLM inference ☆222 · Updated last year
- ☆93 · Updated last year
- A bagel, with everything. ☆324 · Updated last year
- ☆560 · Updated last year
- ☆307 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… (a minimal SAE sketch follows this list) ☆622 · Updated 5 months ago
- Mistral 7B playing DOOM ☆135 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- ☆144 · Updated 2 years ago
- A curated list of data for reasoning AI ☆137 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆719 · Updated last year
- Batched LoRAs (a per-sequence application sketch follows this list) ☆345 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Fast parallel LLM inference for MLX ☆206 · Updated last year
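For the pure NumPy Mamba entry: the core of Mamba is a selective state-space scan with input-dependent step sizes and projections. Below is a minimal sketch of that recurrence, assuming a simplified zero-order-hold discretization; the function name, argument names, and shapes are illustrative, not that repository's API.

```python
import numpy as np

def selective_scan(x, delta, A, B, C):
    """Sequential selective state-space scan (the core idea of Mamba).

    x:     (L, D)  input sequence
    delta: (L, D)  input-dependent step sizes (positive)
    A:     (D, N)  state transition (negative values for stability)
    B:     (L, N)  input-dependent input projection
    C:     (L, N)  input-dependent output projection
    Returns y: (L, D)
    """
    L, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))
    y = np.empty((L, D))
    for t in range(L):
        # ZOH-style discretization of the continuous-time system.
        dA = np.exp(delta[t][:, None] * A)        # (D, N)
        dB = delta[t][:, None] * B[t][None, :]    # (D, N)
        h = dA * h + dB * x[t][:, None]           # recurrent state update
        y[t] = h @ C[t]                           # read out along the state dim
    return y
```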
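For the bare-bones BPE entry: byte-pair-encoding training repeatedly merges the most frequent adjacent symbol pair. A minimal sketch of classic Sennrich-style merge training (whitespace pre-tokenization, no end-of-word marker), not that repository's implementation.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges: each word starts as a tuple of characters;
    every step merges the most frequent adjacent pair across all words."""
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])  # apply the merge
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

print(train_bpe("low lower lowest newest widest", 10))
# e.g. [('l', 'o'), ('lo', 'w'), ...]
```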
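For the Self-Extend entry: the trick is remapping relative positions rather than retraining. Tokens within a neighbor window keep their exact relative positions, so local attention is unchanged; distant tokens share "grouped" positions via floor division, so no relative position ever exceeds the pretraining range. A sketch of that remapping from my reading of the idea; `group_size` and `window` are illustrative parameter names.

```python
import numpy as np

def self_extend_positions(seq_len, group_size, window):
    """Remapped relative positions for Self-Extend-style grouped attention.

    Returns a (seq_len, seq_len) matrix of relative positions to feed
    into RoPE or a relative bias in place of the usual q - k."""
    q = np.arange(seq_len)[:, None]   # query positions
    k = np.arange(seq_len)[None, :]   # key positions
    rel = q - k                       # standard relative positions
    # Grouped positions: compress by floor division, then shift so the
    # two regimes are continuous at the window boundary.
    grouped = (q // group_size) - (k // group_size) + window - window // group_size
    return np.where(rel < window, rel, grouped)
```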
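For the SAE pipeline entry: the heart of such pipelines is a sparse autoencoder trained to reconstruct a model's residual-stream activations through an overcomplete ReLU bottleneck, with an L1 penalty pushing feature activations toward sparsity. A minimal PyTorch sketch of that objective; the layer sizes and `l1_coeff` value are placeholders, not that repository's configuration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete ReLU autoencoder of the kind used for LLM interpretability."""
    def __init__(self, d_model, d_features):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)  # d_features >> d_model
        self.dec = nn.Linear(d_features, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))  # sparse feature activations
        return self.dec(f), f

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    recon = (x - x_hat).pow(2).mean()      # reconstruct the activations
    sparsity = f.abs().mean()              # L1 penalty on features
    return recon + l1_coeff * sparsity
```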
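For the batched LoRAs entry: the idea is serving many LoRA adapters in one batch by gathering each sequence's low-rank factors and applying them alongside the shared frozen weight in a single batched contraction. A NumPy sketch under assumed shapes; `adapter_ids` and the einsum layout are illustrative, not that repository's interface.

```python
import numpy as np

def batched_lora_forward(x, W, A, B, adapter_ids, alpha=1.0):
    """Apply a different LoRA adapter to each sequence in a batch.

    x:           (batch, seq, d_in)    activations
    W:           (d_in, d_out)         shared frozen base weight
    A:           (n_adapters, d_in, r) low-rank down-projections
    B:           (n_adapters, r, d_out) low-rank up-projections
    adapter_ids: (batch,) which adapter each sequence uses
    """
    base = x @ W                   # shared path: one matmul for the batch
    Ab = A[adapter_ids]            # gather per-sequence factors: (batch, d_in, r)
    Bb = B[adapter_ids]            # (batch, r, d_out)
    # Per-sequence low-rank update in one contraction.
    delta = np.einsum('bsd,bdr,bro->bso', x, Ab, Bb)
    return base + alpha * delta    # LoRA conventionally scales by alpha / r
```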