NVIDIA-NeMo / Emerging-Optimizers
☆147 · Updated this week
Alternatives and similar repositories for Emerging-Optimizers
Users interested in Emerging-Optimizers are comparing it to the libraries listed below
- Flash-Muon: An Efficient Implementation of Muon Optimizer (see the Muon update sketch after this list) ☆233 · Updated 7 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆117 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated 2 weeks ago
- ☆131 · Updated 8 months ago
- Muon fsdp 2 ☆53 · Updated 6 months ago
- ☆124 · Updated last year
- supporting pytorch FSDP for optimizers ☆84 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- Normalized Transformer (nGPT) ☆198 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Fast and memory-efficient exact attention ☆75 · Updated 11 months ago
- JAX bindings for Flash Attention v2 ☆103 · Updated last week
- ☆105 · Updated 11 months ago
- 🔥 A minimal training framework for scaling FLA models ☆343 · Updated 2 months ago
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 4 months ago
- Accelerated First Order Parallel Associative Scan ☆196 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 6 months ago
- Load compute kernels from the Hub ☆397 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆247 · Updated 8 months ago
- ☆270 · Updated 8 months ago
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ☆75 · Updated last year
- Experiment of using Tangent to autodiff triton ☆82 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆137 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- ☆92 · Updated last year
- ring-attention experiments ☆165 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆263 · Updated 4 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
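
Several of the entries above (Flash-Muon, Muon fsdp 2) implement or scale the Muon optimizer that Emerging-Optimizers also targets. As a rough orientation only, here is a minimal sketch of a Muon-style update for a single 2D parameter, assuming the commonly used quintic Newton-Schulz orthogonalization; the helper names, coefficients, and scaling rule below are illustrative assumptions, not any listed repository's API.

```python
import torch


def newton_schulz_orthogonalize(m: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately project a 2D matrix onto the nearest semi-orthogonal matrix
    via a quintic Newton-Schulz iteration (coefficients commonly cited for Muon)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    x = m / (m.norm() + 1e-7)              # normalize so the iteration converges
    transposed = x.shape[0] > x.shape[1]
    if transposed:
        x = x.T                            # iterate on the wide orientation
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * s @ s) @ x
    return x.T if transposed else x


def muon_step(weight, grad, momentum, lr=0.02, beta=0.95):
    """One hypothetical Muon-style step for a single 2D weight: momentum
    accumulation, orthogonalized direction, then a shape-aware scale."""
    momentum.mul_(beta).add_(grad)                 # classic momentum buffer
    direction = newton_schulz_orthogonalize(momentum)
    scale = max(1.0, weight.shape[0] / weight.shape[1]) ** 0.5
    weight.add_(direction, alpha=-lr * scale)
```

Repositories such as Flash-Muon and the FSDP-oriented variants focus on making exactly this orthogonalization step fast and shardable across devices; the defaults above are for illustration, not values taken from those projects.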