facebookresearch/schedule_free
Schedule-Free Optimization in PyTorch
☆1,898 · Updated 2 weeks ago
Related projects
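For context, here is a minimal usage sketch of a schedule-free optimizer, following the pattern documented in the repository's README (the `AdamWScheduleFree` class and the requirement to switch the optimizer between train and eval modes come from that documentation; the model, data, and learning rate below are placeholders, so check the README for the current API):

```python
# pip install schedulefree
import torch
import schedulefree

model = torch.nn.Linear(10, 2)
# Schedule-free AdamW: no learning-rate schedule is needed.
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=2.5e-3)

# The optimizer itself must be switched between train and eval modes,
# in addition to the model, because it maintains averaged parameters.
model.train()
optimizer.train()
for _ in range(3):
    x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

model.eval()
optimizer.eval()  # swap in the averaged weights before evaluation
```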
Alternatives and complementary repositories for schedule_free
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,339 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,199 · Updated this week
- Tile primitives for speedy kernels ☆1,658 · Updated this week
- NanoGPT (124M) quality in 7.8 8xH100-minutes ☆1,033 · Updated this week
- Annotated version of the Mamba paper ☆457 · Updated 8 months ago
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,679 · Updated this week
- UNet diffusion model in pure CUDA ☆584 · Updated 4 months ago
- A native PyTorch library for large model training ☆2,623 · Updated this week
- TensorDict is a dedicated tensor container for PyTorch. ☆840 · Updated this week
- Puzzles for learning Triton ☆1,135 · Updated this week
- For optimization algorithm research and development. ☆449 · Updated this week
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch ☆2,041 · Updated 5 months ago
- Tensors, for human consumption ☆1,113 · Updated 3 weeks ago
- Open-weights language model from Google DeepMind, based on Griffin. ☆607 · Updated 4 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,339 · Updated 2 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆803 · Updated 3 months ago
- PyTorch-native quantization and sparsity for training and inference ☆1,585 · Updated this week
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆515 · Updated 4 months ago
- nanoGPT-style version of Llama 3.1 ☆1,246 · Updated 3 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,607 · Updated last month
- The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling ☆703 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆469 · Updated 3 weeks ago
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) ☆1,223 · Updated last year
- A minimal PyTorch implementation of probabilistic diffusion models for 2D datasets. ☆663 · Updated 6 months ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,624 · Updated 8 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆252 · Updated 5 months ago
- Type annotations and runtime checking for shape and dtype of JAX/NumPy/PyTorch/etc. arrays. https://docs.kidger.site/jaxtyping/ ☆1,219 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483 · Updated 3 weeks ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆571 · Updated last week
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆939 · Updated 9 months ago