The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
☆994 · Updated Jan 30, 2024
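For readers comparing the optimizers listed below: Sophia is designed as a drop-in replacement for AdamW in an ordinary PyTorch training loop. The following is a minimal sketch, assuming the `SophiaG` class and the hyperparameter names (`lr`, `betas`, `rho`, `weight_decay`) from the released code; the model and loss are stand-ins, and the periodic Hessian re-estimation is only indicated in a comment rather than spelled out.

```python
import torch
import torch.nn as nn
from sophia import SophiaG  # optimizer class from this repo (assumed import path)

model = nn.Linear(512, 512)  # stand-in for a language model

# Hyperparameter names and ballpark values follow the paper's GPT-2 recipes (assumption).
optimizer = SophiaG(model.parameters(), lr=2e-4, betas=(0.965, 0.99),
                    rho=0.01, weight_decay=1e-1)

for step in range(100):
    x = torch.randn(32, 512)
    loss = model(x).pow(2).mean()  # dummy objective in place of an LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    # Unlike AdamW, Sophia additionally refreshes its diagonal Hessian estimate
    # every k steps from an extra forward/backward pass; see the repo's training
    # scripts for the exact update call.
```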
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the libraries listed below.
- Plug-and-play optimizer that claims to cut model training costs by 50%; reportedly 2x faster than Adam on LLMs. ☆382 · Updated Jun 4, 2024
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,163 · Updated Jan 11, 2024
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated Jul 31, 2023
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in PyTorch ☆2,182 · Updated Nov 27, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,870 · Updated Jun 10, 2024
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆703 · Updated Jan 26, 2026
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆455 · Updated May 13, 2025
- Schedule-Free Optimization in PyTorch ☆2,274 · Updated May 21, 2025
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,683 · Updated Oct 28, 2024
- Maximal update parametrization (µP) ☆1,698 · Updated Jul 17, 2024
- Foundation Architecture for (M)LLMs ☆3,135 · Updated Apr 11, 2024
- LOMO: LOw-Memory Optimization ☆990 · Updated Jul 2, 2024
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆285 · Updated Feb 27, 2023
- A PyTorch native platform for training generative AI models ☆5,242 · Updated this week
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆813 · Updated Jun 8, 2025
- Fast and memory-efficient exact attention ☆23,344 · Updated this week
- Scaling Data-Constrained Language Models ☆343 · Updated Jun 28, 2025
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,471 · Updated Mar 30, 2026
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆654 · Updated Dec 27, 2024
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,018 · Updated Aug 21, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆8,121 · Updated this week
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Updated Feb 24, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,211 · Updated Jul 11, 2024
- D-Adaptation for SGD, Adam and AdaGrad ☆531 · Updated Jan 22, 2025
- NanoGPT (124M) in 2 minutes ☆5,095 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,417 · Updated Mar 30, 2026
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters