Cerebras / gigaGPT
a small code base for training large models
☆290 · Updated 4 months ago
Alternatives and similar repositories for gigaGPT:
Users interested in gigaGPT are comparing it to the libraries listed below.
- Visualize the intermediate output of Mistral 7B ☆354 · Updated 2 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆204 · Updated 5 months ago
- The repository for the code of the UltraFastBERT paper ☆517 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆342 · Updated 8 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Open weights language model from Google DeepMind, based on Griffin. ☆636 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆273 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 5 months ago
- ☆412 · Updated last year
- ☆526 · Updated 7 months ago
- Mistral7B playing DOOM ☆130 · Updated 9 months ago
- An implementation of bucketMul LLM inference ☆216 · Updated 9 months ago
- run paligemma in real time ☆131 · Updated 11 months ago
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆709 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆606 · Updated 3 weeks ago
- A pure NumPy implementation of Mamba. ☆222 · Updated 9 months ago
- A comprehensive deep dive into the world of tokens ☆221 · Updated 9 months ago
- A bagel, with everything. ☆319 · Updated last year
- ☆143 · Updated 2 years ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- LLM Analytics ☆655 · Updated 6 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- batched loras ☆341 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆281 · Updated 2 months ago
- A repository for research on medium sized language models. ☆493 · Updated this week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆92 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,378 · Updated last year
- ☆302 · Updated 9 months ago