The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
☆999 · Jan 30, 2024 · Updated 2 years ago
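Sophia keeps an exponential moving average (EMA) of gradients and an EMA of a diagonal Hessian estimate (refreshed only every few steps), then takes a preconditioned step that is clipped elementwise. A minimal NumPy sketch of the update rule follows; the hyperparameter names and defaults (`beta1`, `rho`, `eps`) and the exact clipping form are assumptions based on the paper, not code taken from the official repository:

```python
import numpy as np

def sophia_step(theta, m, h, grad, lr=1e-4, beta1=0.965, rho=0.04, eps=1e-12):
    """One Sophia-style step (sketch, not the official implementation).

    m     -- EMA of gradients (momentum)
    h     -- assumed to hold an EMA of a diagonal Hessian estimate,
             refreshed elsewhere every k steps
    The elementwise clip bounds each coordinate's move to at most lr.
    """
    m = beta1 * m + (1 - beta1) * grad                       # momentum update
    step = np.clip(m / np.maximum(rho * h, eps), -1.0, 1.0)  # preconditioned, clipped
    return theta - lr * step, m

# Toy usage on a quadratic f(x) = 0.5 * x^T diag(h) x, whose Hessian is diag(h)
theta = np.array([1.0, -2.0])
h = np.array([4.0, 0.5])            # diagonal Hessian estimate
m = np.zeros_like(theta)
grad = h * theta                    # exact gradient of the quadratic
theta, m = sophia_step(theta, m, h, grad, lr=0.05)
```

Because of the clip, no coordinate can move by more than `lr` in a single step, which is the mechanism the paper credits for robustness to inaccurate Hessian estimates.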
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the libraries listed below.
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer claimed to be 2x faster than Adam on LLMs ☆382 · Jun 4, 2024 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,167 · Jan 11, 2024 · Updated 2 years ago
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Jul 31, 2023 · Updated 2 years ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in PyTorch ☆2,185 · Nov 27, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,901 · Jun 10, 2024 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆704 · Jan 26, 2026 · Updated 3 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More. https://arxiv.org/abs/2406.16793 ☆458 · May 13, 2025 · Updated 11 months ago
- Schedule-Free Optimization in PyTorch ☆2,276 · May 21, 2025 · Updated 11 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,690 · Oct 28, 2024 · Updated last year
- Maximal update parametrization (µP) ☆1,704 · Jul 17, 2024 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,133 · Apr 11, 2024 · Updated 2 years ago
- LOMO: LOw-Memory Optimization ☆991 · Jul 2, 2024 · Updated last year
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆286 · Feb 27, 2023 · Updated 3 years ago
- A PyTorch native platform for training generative AI models ☆5,309 · Updated this week
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆816 · Jun 8, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- Scaling Data-Constrained Language Models ☆343 · Jun 28, 2025 · Updated 10 months ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,503 · Apr 28, 2026 · Updated last week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Dec 27, 2024 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,018 · Aug 21, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch ☆8,178 · Updated this week
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,225 · Jul 11, 2024 · Updated last year
- D-Adaptation for SGD, Adam, and AdaGrad ☆532 · Jan 22, 2025 · Updated last year
- NanoGPT (124M) in 90 seconds ☆5,200 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆10,442 · Apr 21, 2026 · Updated 2 weeks ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,928 · Mar 14, 2024 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,114 · Jan 23, 2026 · Updated 3 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆664 · Jun 1, 2024 · Updated last year
- Distributed K-FAC preconditioner for PyTorch ☆97 · Apr 30, 2026 · Updated last week
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,532 · Jul 16, 2023 · Updated 2 years ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Mar 6, 2025 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,852 · Updated this week
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,483 · May 1, 2025 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆2,544 · Jan 19, 2026 · Updated 3 months ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆197 · Apr 3, 2026 · Updated last month
- ☆63 · Oct 3, 2024 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute ☆1,364 · Jun 13, 2024 · Updated last year
- ☆34 · Jan 25, 2024 · Updated 2 years ago
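Several of the repositories above are alternative optimizers. As one illustration, the Lion update from the list can be sketched in a few lines; the hyperparameter names and defaults (`beta1`, `beta2`, `wd`) are assumptions based on the Lion paper, not code from the linked implementation:

```python
import numpy as np

def lion_step(theta, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion-style step (sketch, assumed from the paper).

    The update direction is only the sign of an interpolated momentum,
    combined with decoupled weight decay; the momentum itself is then
    updated with a second, slower EMA coefficient.
    """
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # sign of interpolated momentum
    theta = theta * (1 - lr * wd) - lr * update       # decoupled weight decay + step
    m = beta2 * m + (1 - beta2) * grad                # momentum tracks gradients
    return theta, m
```

Because the update is only a sign, every coordinate moves by exactly `lr` (before weight decay), which is why Lion is typically run with a smaller learning rate than AdamW.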