Liuhong99 / Sophia
The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
☆981 · Jan 30, 2024 · Updated 2 years ago
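Sophia's core step, as described in the paper, is a diagonally preconditioned update with element-wise clipping. Below is a minimal PyTorch sketch of that update for a single parameter tensor, assuming a momentum buffer `m` and an EMA diagonal Hessian estimate `h` are maintained elsewhere (the paper refreshes `h` only every few steps); the function and argument names are illustrative, not the repo's API.

```python
import torch

def sophia_style_step(param, m, h, lr=1e-4, beta1=0.96,
                      gamma=0.01, eps=1e-12, weight_decay=0.0):
    """Illustrative sketch of one Sophia-style update; not the repo's API.

    m: EMA of gradients (momentum buffer).
    h: EMA diagonal Hessian estimate, assumed maintained elsewhere.
    """
    grad = param.grad
    # Decoupled weight decay, AdamW-style.
    if weight_decay != 0.0:
        param.data.mul_(1.0 - lr * weight_decay)
    # Momentum: m <- beta1 * m + (1 - beta1) * grad
    m.mul_(beta1).add_(grad, alpha=1.0 - beta1)
    # Preconditioned, element-wise clipped step:
    #   theta <- theta - lr * clip(m / max(gamma * h, eps), 1)
    ratio = m / torch.clamp(gamma * h, min=eps)
    param.data.add_(ratio.clamp_(-1.0, 1.0), alpha=-lr)
```

The element-wise clipping bounds the per-coordinate step size wherever the Hessian estimate is small or noisy, which is what the paper credits for remaining stable with a cheap, infrequently refreshed diagonal preconditioner.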
Alternatives and similar repositories for Sophia
Users interested in Sophia are comparing it to the libraries listed below.
- Effortless plug-and-play optimizer to cut model training costs by 50%. New optimizer that is 2x faster than Adam on LLMs. ☆381 · Jun 4, 2024 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,148 · Jan 11, 2024 · Updated 2 years ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch ☆2,184 · Nov 27, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,837 · Jun 10, 2024 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,256 · May 21, 2025 · Updated 8 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆693 · Jan 26, 2026 · Updated 3 weeks ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,672 · Oct 28, 2024 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,130 · Apr 11, 2024 · Updated last year
- LOMO: LOw-Memory Optimization ☆987 · Jul 2, 2024 · Updated last year
- maximal update parametrization (µP) ☆1,676 · Jul 17, 2024 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆452 · May 13, 2025 · Updated 9 months ago
- A PyTorch native platform for training generative AI models ☆5,069 · Updated this week
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Jul 31, 2023 · Updated 2 years ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,019 · Aug 21, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,351 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Dec 27, 2024 · Updated last year
- Scaling Data-Constrained Language Models ☆340 · Jun 28, 2025 · Updated 7 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆807 · Jun 8, 2025 · Updated 8 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM ☆1,481 · May 1, 2025 · Updated 9 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Mar 6, 2025 · Updated 11 months ago
- NanoGPT (124M) in 2 minutes ☆4,624 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,336 · Feb 5, 2026 · Updated last week
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆823 · Mar 30, 2024 · Updated last year
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,021 · Jan 23, 2026 · Updated 3 weeks ago
- Robust recipes to align language models with human and AI preferences ☆5,495 · Sep 8, 2025 · Updated 5 months ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,531 · Jul 16, 2023 · Updated 2 years ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,704 · Jan 12, 2026 · Updated last month
- D-Adaptation for SGD, Adam and AdaGrad ☆528 · Jan 22, 2025 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,361 · Jun 13, 2024 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,436 · Jul 17, 2025 · Updated 6 months ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,463 · Nov 7, 2023 · Updated 2 years ago
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆665 · Jun 1, 2024 · Updated last year