Cerebras / gigaGPT
A small code base for training large models
☆299 · Updated last month
Alternatives and similar repositories for gigaGPT
Users interested in gigaGPT are comparing it to the libraries listed below.
- Inference code for Persimmon-8B ☆415 · Updated last year
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆205 · Updated 6 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆639 · Updated last week
- Visualize the intermediate output of Mistral 7B ☆362 · Updated 4 months ago
- ☆412 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆711 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆345 · Updated 10 months ago
- A pure NumPy implementation of Mamba. ☆223 · Updated 10 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆275 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆93 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆153 · Updated 7 months ago
- The repository for the code of the UltraFastBERT paper ☆514 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆365 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆615 · Updated 2 months ago
- ☆143 · Updated 2 years ago
- Run PaliGemma in real time ☆131 · Updated last year
- Mistral-7B playing DOOM ☆131 · Updated 10 months ago
- ☆536 · Updated 9 months ago
- Our own implementation of "Layer-Selective Rank Reduction" ☆238 · Updated last year
- Fast, bare-bones BPE for modern tokenizer training ☆157 · Updated 2 months ago
- ☆92 · Updated last year
- ☆864 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆230 · Updated 7 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆237 · Updated 4 months ago
- An implementation of bucketMul LLM inference ☆217 · Updated 11 months ago
- Batched LoRAs ☆343 · Updated last year
- A bagel, with everything. ☆320 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet ☆267 · Updated 3 weeks ago