The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
☆988 · Updated Jan 30, 2024
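At its core, Sophia pairs an exponential moving average (EMA) of the gradient with an EMA of a diagonal Hessian estimate, and clips the preconditioned step element-wise. The sketch below is a minimal NumPy rendering of that update rule on a toy quadratic; the hyperparameter names and values are illustrative, not the repo's API, and the real optimizer estimates the Hessian diagonal stochastically (e.g. with a Gauss–Newton–Bartlett estimator every few steps) rather than using the exact diagonal as done here:

```python
import numpy as np

def sophia_step(theta, m, h, grad, hess_diag,
                lr=0.02, beta1=0.9, beta2=0.99, gamma=0.1, eps=1e-12):
    """One Sophia-style update (illustrative hyperparameters):
    EMA of gradient and of a diagonal Hessian estimate, then an
    element-wise clipped, Hessian-preconditioned step."""
    m = beta1 * m + (1 - beta1) * grad        # gradient EMA
    h = beta2 * h + (1 - beta2) * hess_diag   # diagonal Hessian EMA
    # Precondition by the Hessian EMA, then clip each coordinate to [-1, 1]
    update = np.clip(m / np.maximum(gamma * h, eps), -1.0, 1.0)
    return theta - lr * update, m, h

# Toy quadratic: loss(x) = 0.5 * sum(c * x**2), grad = c*x, Hessian diag = c
c = np.array([1.0, 10.0])       # badly conditioned curvature
theta = np.array([1.0, 1.0])
m = np.zeros(2)
h = np.zeros(2)
for _ in range(400):
    # Here the exact diagonal `c` stands in for the stochastic estimate
    theta, m, h = sophia_step(theta, m, h, grad=c * theta, hess_diag=c)
print("final loss:", 0.5 * np.sum(c * theta**2))
```

Early on, while the Hessian EMA is small, every coordinate is clipped and the method behaves like sign momentum; once `h` warms up, well-estimated coordinates take damped Newton-like steps, which is what makes the update robust to the spread in curvature.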
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the repositories listed below.
- Effortless plug-and-play optimizer to cut model training costs by 50%. A new optimizer claimed to be 2x faster than Adam on LLMs. ☆382 · Updated Jun 4, 2024
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,157 · Updated Jan 11, 2024
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated Jul 31, 2023
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch ☆2,183 · Updated Nov 27, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,861 · Updated Jun 10, 2024
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆701 · Updated Jan 26, 2026
- Code for Adam-mini: Use Fewer Learning Rates To Gain More. https://arxiv.org/abs/2406.16793 ☆453 · Updated May 13, 2025
- Schedule-Free Optimization in PyTorch ☆2,265 · Updated May 21, 2025
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,681 · Updated Oct 28, 2024
- Maximal update parametrization (µP) ☆1,692 · Updated Jul 17, 2024
- Foundation Architecture for (M)LLMs ☆3,134 · Updated Apr 11, 2024
- LOMO: LOw-Memory Optimization ☆989 · Updated Jul 2, 2024
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆284 · Updated Feb 27, 2023
- A PyTorch native platform for training generative AI models ☆5,191 · Updated this week
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆811 · Updated Jun 8, 2025
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- Scaling Data-Constrained Language Models ☆342 · Updated Jun 28, 2025
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,431 · Updated Mar 5, 2026
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Updated Dec 27, 2024
- Fast & simple repository for pre-training and fine-tuning T5-style models ☆1,017 · Updated Aug 21, 2024
- Accessible large language models via k-bit quantization for PyTorch ☆8,078 · Updated this week
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆396 · Updated Feb 24, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,209 · Updated Jul 11, 2024
- NanoGPT (124M) in 2 minutes ☆5,003 · Updated Mar 17, 2026
- D-Adaptation for SGD, Adam and AdaGrad ☆530 · Updated Jan 22, 2025
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆10,388 · Updated Mar 18, 2026
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,932 · Updated Mar 14, 2024
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,059 · Updated Jan 23, 2026
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆666 · Updated Jun 1, 2024
- Distributed K-FAC preconditioner for PyTorch ☆95 · Updated this week
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,538 · Updated Jul 16, 2023
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,324 · Updated Mar 6, 2025
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,806 · Updated this week
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,479 · Updated May 1, 2025
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆192 · Updated Mar 22, 2026
- ☆63 · Updated Oct 3, 2024
- Muon is an optimizer for hidden layers in neural networks ☆2,428 · Updated Jan 19, 2026
- Cramming the training of a (BERT-type) language model into limited compute ☆1,360 · Updated Jun 13, 2024
- ☆34 · Updated Jan 25, 2024
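Of the optimizers listed above, Lion has the simplest update rule: take the sign of an interpolation between the momentum buffer and the current gradient, then refresh the momentum with a second EMA. A minimal NumPy sketch of that rule on a toy quadratic (hyperparameter names and values here are illustrative, not the library's API):

```python
import numpy as np

def lion_step(theta, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion-style update (illustrative hyperparameters):
    sign of an interpolated direction, decoupled weight decay,
    then a momentum EMA with a separate decay rate."""
    direction = np.sign(beta1 * m + (1 - beta1) * grad)  # update uses beta1
    theta = theta * (1 - lr * wd) - lr * direction       # decoupled weight decay
    m = beta2 * m + (1 - beta2) * grad                   # momentum uses beta2
    return theta, m

# Toy quadratic: loss(x) = 0.5 * sum(c * x**2), grad = c*x
c = np.array([1.0, 10.0])
theta = np.array([1.0, -1.0])
m = np.zeros(2)
for _ in range(2000):
    theta, m = lion_step(theta, m, grad=c * theta, lr=0.002)
print("final loss:", 0.5 * np.sum(c * theta**2))
```

Because every step has the same magnitude `lr` per coordinate, Lion is typically run with a learning rate several times smaller than AdamW's; on this toy problem it settles into a small oscillation around the minimum rather than converging exactly.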