lucidrains / lion-pytorch
🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch
★2,153 · Updated 8 months ago
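For orientation, Lion (EvoLved Sign Momentum) keeps a single momentum buffer and updates parameters with the sign of an interpolation between that buffer and the current gradient, plus decoupled weight decay. The snippet below is a minimal, unofficial sketch of that update rule for one parameter tensor; the function name, hyperparameter defaults, and in-place style are illustrative assumptions, not the repository's API.

```python
import torch

@torch.no_grad()
def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=1e-2):
    # Sketch of the Lion update rule for a single tensor (hypothetical helper, not the repo's API).
    # Update direction: sign of an interpolation between momentum and the current gradient.
    update = (beta1 * momentum + (1 - beta1) * grad).sign()
    # Decoupled weight decay, applied as in AdamW.
    param.mul_(1 - lr * weight_decay)
    param.add_(update, alpha=-lr)
    # The momentum buffer is updated with a second interpolation coefficient.
    momentum.mul_(beta2).add_(grad, alpha=1 - beta2)
    return param, momentum
```

Because the update is a pure sign step, Lion is commonly reported to work best with a noticeably smaller learning rate and larger weight decay than one would use with AdamW.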
Alternatives and similar repositories for lion-pytorch
Users interested in lion-pytorch are comparing it to the libraries listed below
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models (★797, updated 2 months ago)
- Foundation Architecture for (M)LLMs (★3,099, updated last year)
- maximal update parametrization (µP) (★1,584, updated last year)
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch (★736, updated 3 weeks ago)
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" (★967, updated last year)
- D-Adaptation for SGD, Adam and AdaGrad (★523, updated 7 months ago)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (★3,078, updated 3 months ago)
- Schedule-Free Optimization in PyTorch (★2,202, updated 3 months ago)
- Transformer based on a variant of attention that is linear complexity with respect to sequence length (★794, updated last year)
- torchview: visualize pytorch models (★977, updated 3 months ago)
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model (★601, updated 8 months ago)
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. (★2,766, updated 2 months ago)
- Machine learning metrics for distributed, scalable PyTorch applications. (★2,332, updated this week)
- (★783, updated 2 months ago)
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" (★1,202, updated last year)
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. (★1,643, updated last week)
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 (★1,140, updated last year)
- Structured state space sequence models (★2,706, updated last year)
- A method to increase the speed and lower the memory footprint of existing vision transformers. (★1,083, updated last year)
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (★5,531, updated this week)
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch (★1,165, updated 2 years ago)
- Pytorch library for fast transformer implementations (★1,727, updated 2 years ago)
- SAM: Sharpness-Aware Minimization (PyTorch) (★1,911, updated last year)
- Cramming the training of a (BERT-type) language model into limited compute. (★1,345, updated last year)
- Code release for ConvNeXt V2 model (★1,813, updated last year)
- Vector (and Scalar) Quantization, in Pytorch (★3,492, updated this week)
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 26 knowledge distillation methods p… (★1,541, updated 3 weeks ago)
- FFCV: Fast Forward Computer Vision (and other ML workloads!) (★2,958, updated last year)
- Tensors, for human consumption (★1,275, updated 2 months ago)
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch (★1,256, updated 2 years ago)