The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
☆983 · Updated Jan 30, 2024
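Sophia preconditions an exponential moving average of the gradient with an EMA of a diagonal Hessian estimate (e.g. a Hutchinson or Gauss-Newton-Bartlett estimator in the paper), then clips the resulting step elementwise. A minimal NumPy sketch of that update rule, with assumed hyperparameter names; the repo's actual PyTorch optimizer differs in detail:

```python
import numpy as np

def sophia_step(theta, m, h, grad, hess_diag_est,
                lr=0.1, beta1=0.9, beta2=0.99, gamma=0.05, eps=1e-12):
    """One Sophia-style update (illustrative sketch, not the official code).

    theta: parameters; m: EMA of gradients; h: EMA of a diagonal
    Hessian estimate supplied by the caller (hess_diag_est).
    """
    # Update the two exponential moving averages.
    m = beta1 * m + (1 - beta1) * grad
    h = beta2 * h + (1 - beta2) * hess_diag_est
    # Precondition by the (scaled) Hessian diagonal, then clip each
    # coordinate to [-1, 1] so no single step exceeds lr in magnitude.
    update = np.clip(m / np.maximum(gamma * h, eps), -1.0, 1.0)
    return theta - lr * update, m, h
```

The elementwise clip is what bounds worst-case step size when the curvature estimate is small or stale; coordinates with large curvature take correspondingly smaller preconditioned steps.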
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the libraries listed below.
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs. (☆381 · Updated Jun 4, 2024)
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 (☆1,151 · Updated Jan 11, 2024)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,841 · Updated Jun 10, 2024)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (☆1,678 · Updated Oct 28, 2024)
- Schedule-Free Optimization in PyTorch (☆2,262 · Updated May 21, 2025)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆695 · Updated Jan 26, 2026)
- Foundation Architecture for (M)LLMs (☆3,134 · Updated Apr 11, 2024)
- LOMO: LOw-Memory Optimization (☆988 · Updated Jul 2, 2024)
- Maximal update parametrization (µP) (☆1,685 · Updated Jul 17, 2024)
- Code for Adam-mini: Use Fewer Learning Rates To Gain More. https://arxiv.org/abs/2406.16793 (☆453 · Updated May 13, 2025)
- A PyTorch native platform for training generative AI models (☆5,111 · Updated this week)
- Fast and memory-efficient exact attention (☆22,460 · Updated this week)
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation (☆12 · Updated Jul 31, 2023)
- Fast & simple repository for pre-training and fine-tuning T5-style models (☆1,017 · Updated Aug 21, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,393 · Updated Feb 21, 2026)
- Accessible large language models via k-bit quantization for PyTorch (☆8,019 · Updated this week)
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models (☆808 · Updated Jun 8, 2025)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,196 · Updated Jul 11, 2024)
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" (☆397 · Updated Feb 24, 2024)
- Scaling Data-Constrained Language Models (☆342 · Updated Jun 28, 2025)
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and 1.2M parameters (☆5,933 · Updated Mar 14, 2024)
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM (☆1,481 · Updated May 1, 2025)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,317 · Updated Mar 6, 2025)
- NanoGPT (124M) in 2 minutes (☆4,734 · Updated Feb 27, 2026)
- Hackable and optimized Transformers building blocks, supporting composable construction (☆10,356 · Updated Feb 20, 2026)
- Official implementation of the NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory" (☆822 · Updated Mar 30, 2024)
- Robust recipes to align language models with human and AI preferences (☆5,510 · Updated Sep 8, 2025)
- D-Adaptation for SGD, Adam and AdaGrad (☆529 · Updated Jan 22, 2025)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆22,040 · Updated Jan 23, 2026)
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset (☆7,533 · Updated Jul 16, 2023)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,474 · Updated this week)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,453 · Updated Jul 17, 2025)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆665 · Updated Jun 1, 2024)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆4,706 · Updated Feb 27, 2026)
- Minimalistic large language model 3D-parallelism training (☆2,588 · Updated Feb 19, 2026)
- Cramming the training of a (BERT-type) language model into limited compute (☆1,363 · Updated Jun 13, 2024)
- Train transformer language models with reinforcement learning (☆17,523 · Updated this week)
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… (☆1,464 · Updated Nov 7, 2023)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,083 · Updated Jul 1, 2025)