kyegomez / Sophia
Effortless plug-and-play optimizer that cuts model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs.
☆381 · Jun 4, 2024 · Updated last year
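For context, Sophia's update clips the ratio of an exponential moving average of gradients to a cheap, periodically refreshed diagonal Hessian estimate, so per-coordinate steps stay bounded (sign-SGD-like) in low-curvature directions. Below is a minimal sketch of wiring a drop-in optimizer of this shape into a PyTorch training loop; the `sophia` import path, the `SophiaG` name, and the hyperparameters are assumptions modeled on the official implementation and may not match this repo exactly.

```python
import torch
import torch.nn as nn

# Assumed import path -- kyegomez/Sophia's actual module layout may differ.
from sophia import SophiaG

model = nn.Linear(128, 10)
criterion = nn.CrossEntropyLoss()
# Hyperparameter names and values follow the official Sophia-G recipe
# (betas, rho, weight_decay); treat them as assumptions for this fork.
optimizer = SophiaG(model.parameters(), lr=2e-4,
                    betas=(0.965, 0.99), rho=0.01, weight_decay=1e-1)

# Toy data standing in for a real dataloader.
loader = [(torch.randn(32, 128), torch.randint(0, 10, (32,))) for _ in range(10)]

for step, (x, y) in enumerate(loader):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # The paper additionally refreshes the diagonal Hessian estimate every k
    # steps via an extra backward pass on sampled labels; the official repo
    # exposes this as update_hessian(), which this fork may or may not mirror.
```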
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the libraries listed below.
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆981 · Jan 30, 2024 · Updated 2 years ago
- An EXA-scale repository of multi-modality AI resources, from papers and models to foundational libraries! ☆40 · Feb 1, 2024 · Updated 2 years ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch (its update rule is sketched after this list) ☆2,184 · Nov 27, 2024 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,130 · Apr 11, 2024 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333); its two-forward-pass update is sketched after this list ☆1,143 · Jan 11, 2024 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆340 · Jun 28, 2025 · Updated 7 months ago
- Fast & simple repository for pre-training and fine-tuning T5-style models ☆1,019 · Aug 21, 2024 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Apr 30, 2024 · Updated last year
- Simple AutoGPT with tree of thoughts ☆14 · May 25, 2023 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Jun 21, 2023 · Updated 2 years ago
- Salesforce open-source LLMs with 8k sequence length ☆724 · Jan 31, 2025 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆693 · Jan 26, 2026 · Updated 2 weeks ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Apr 21, 2024 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆70 · Sep 3, 2024 · Updated last year
- D-Adaptation for SGD, Adam and AdaGrad ☆528 · Jan 22, 2025 · Updated last year
- Fast, Modern, and Low-Precision PyTorch Optimizers ☆124 · Dec 29, 2025 · Updated last month
- Maximal update parametrization (µP) ☆1,676 · Jul 17, 2024 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆734 · May 25, 2024 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Jun 22, 2023 · Updated 2 years ago
- Accessible large language models via k-bit quantization for PyTorch ☆7,952 · Updated this week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Dec 27, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Jun 10, 2024 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆614 · Jul 2, 2024 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,514 · Aug 13, 2024 · Updated last year
- LOMO: LOw-Memory Optimization ☆987 · Jul 2, 2024 · Updated last year
- ☆47 · Jan 18, 2024 · Updated 2 years ago
- ☆553 · Feb 8, 2026 · Updated last week
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,066 · Mar 7, 2024 · Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆190 · Jan 11, 2026 · Updated last month
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… ☆6,087 · Jul 1, 2025 · Updated 7 months ago
- Minimalistic large language model 3D-parallelism training ☆2,544 · Dec 11, 2025 · Updated 2 months ago
- GoldFinch and other hybrid Transformer components ☆45 · Jul 20, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,672 · Oct 28, 2024 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Jun 30, 2025 · Updated 7 months ago
- ☆21 · Jan 23, 2024 · Updated 2 years ago
- OpenLLaMA: a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset ☆7,531 · Jul 16, 2023 · Updated 2 years ago
- General technology for enabling AI capabilities with LLMs and MLLMs ☆4,284 · Dec 22, 2025 · Updated last month
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆452 · May 13, 2025 · Updated 9 months ago
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
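As referenced in the Lion entry above, that optimizer reduces to a three-line update: interpolate momentum and gradient, take the sign, and apply decoupled weight decay. The sketch below is a didactic plain-PyTorch reimplementation of the rule from the Lion paper, not the API of the lucidrains repository; `lion_step` and its default hyperparameters are illustrative.

```python
import torch

@torch.no_grad()
def lion_step(params, momenta, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update per parameter: sign of an interpolated momentum,
    plus decoupled weight decay (sketch of the published rule)."""
    for p, m in zip(params, momenta):
        if p.grad is None:
            continue
        g = p.grad
        # c_t = beta1 * m_{t-1} + (1 - beta1) * g_t; the step is sign(c_t)
        update = (beta1 * m + (1 - beta1) * g).sign()
        # theta_t = theta_{t-1} - lr * (sign(c_t) + lambda * theta_{t-1})
        p.add_(update + weight_decay * p, alpha=-lr)
        # m_t = beta2 * m_{t-1} + (1 - beta2) * g_t
        m.mul_(beta2).add_(g, alpha=1 - beta2)

# momenta are zero-initialized buffers shaped like each parameter:
# momenta = [torch.zeros_like(p) for p in model.parameters()]
```

Because only the sign of the interpolated momentum enters the update, every coordinate moves by the same magnitude per step, which is why Lion typically wants a smaller learning rate than AdamW.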
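The MeZO entry above fine-tunes with forward passes only; its memory trick is regenerating the same Gaussian perturbation from a stored RNG seed rather than materializing it. A minimal sketch of one SPSA-style MeZO step follows, assuming a hypothetical `loss_fn(model, batch)` callable; it tracks the published algorithm, not the authors' released code.

```python
import torch

@torch.no_grad()
def mezo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3):
    """One MeZO step: two forward passes, no backward pass, and O(1) extra
    memory because z is regenerated from a seed (sketch of the published
    algorithm; loss_fn(model, batch) is an assumed interface)."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        gen = torch.Generator().manual_seed(seed)  # same z on every call
        for p in model.parameters():
            z = torch.randn(p.shape, generator=gen, dtype=p.dtype)
            p.add_(z.to(p.device), alpha=scale * eps)

    perturb(+1); loss_plus = loss_fn(model, batch)   # L(theta + eps*z)
    perturb(-2); loss_minus = loss_fn(model, batch)  # L(theta - eps*z)
    perturb(+1)                                      # restore theta
    grad_scale = (loss_plus - loss_minus).item() / (2 * eps)

    # Projected-gradient update: theta -= lr * grad_scale * z
    gen = torch.Generator().manual_seed(seed)
    for p in model.parameters():
        z = torch.randn(p.shape, generator=gen, dtype=p.dtype)
        p.add_(z.to(p.device), alpha=-lr * grad_scale)
    return loss_plus
```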