Cerebras / gigaGPT
a small code base for training large models
☆318 · Updated 8 months ago
Alternatives and similar repositories for gigaGPT
Users interested in gigaGPT are comparing it to the libraries listed below.
- Visualize the intermediate output of Mistral 7B ☆382 · Updated 11 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆660 · Updated 7 months ago
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆218 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆355 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆175 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆233 · Updated last year
- ☆416 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated 3 months ago
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- ☆94 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention (see the sketch after this list) ☆119 · Updated 2 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Updated 2 years ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆612 · Updated last year
- Run PaliGemma in real time ☆133 · Updated last year
- Long-context evaluation for large language models ☆225 · Updated 10 months ago
- ☆866 · Updated 2 years ago
- ☆314 · Updated last year
- Mistral 7B playing DOOM ☆138 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆345 · Updated last year
- ☆446 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆627 · Updated 9 months ago
- A bagel, with everything. ☆325 · Updated last year
- Batched LoRAs ☆348 · Updated 2 years ago
- Our own implementation of 'Layer-Selective Rank Reduction' ☆240 · Updated last year
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 8 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 11 months ago
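For context on the Self-Extend entry above: the technique extends a model's usable context by remapping relative positions, keeping exact positions for nearby tokens and floor-divided "grouped" positions for distant ones, so the model never sees a relative distance larger than it was trained on. The sketch below is illustrative only, not code from the listed repository; the function name and the default `group_size`/`neighbor_window` values are assumptions chosen for the example.

```python
import torch

def self_extend_positions(q_pos: torch.Tensor, k_pos: torch.Tensor,
                          group_size: int = 4,
                          neighbor_window: int = 512) -> torch.Tensor:
    """Illustrative sketch of Self-Extend's grouped-attention remapping.

    Tokens inside the neighbor window keep their exact relative position;
    more distant tokens share coarser, floor-divided position ids.
    """
    # Exact relative positions (query index minus key index).
    rel = q_pos[:, None] - k_pos[None, :]
    # Grouped positions: floor-divide both sides by the group size so
    # `group_size` neighboring tokens collapse onto one position id.
    grouped = q_pos[:, None] // group_size - k_pos[None, :] // group_size
    # Shift the grouped regime so it lines up with the exact regime at
    # the window boundary instead of jumping backwards.
    grouped = grouped + neighbor_window - neighbor_window // group_size
    # Use exact positions up close, grouped positions beyond the window.
    return torch.where(rel < neighbor_window, rel, grouped)

# Example: with a 512-token window and group size 4, a key 2048 tokens
# away is seen at remapped distance 384 + 2048 // 4 = 896.
pos = torch.arange(4096)
remapped = self_extend_positions(pos, pos)
print(remapped.max().item())  # stays far below 4096
```

The remapped positions would then feed whatever positional scheme the model uses (e.g. RoPE) in place of the raw indices, which is what lets a model trained on short contexts attend over longer ones without fine-tuning.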