zaydzuhri / flame
Fork of the Flame repo, used for training new models currently in development.
☆11 · Updated this week
Alternatives and similar repositories for flame:
Users interested in flame are comparing it to the libraries listed below.
- Code for the paper "Function-Space Learning Rates" ☆19 · Updated 2 weeks ago
- ☆21 · Updated 4 months ago
- JAX Scalify: end-to-end scaled arithmetic ☆16 · Updated 6 months ago
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data. ☆18 · Updated 6 months ago
- Implementations of attention with the softpick function, naive and FlashAttention-2 ☆28 · Updated this week
- Implementation of Spectral State Space Models ☆16 · Updated last year
- Utilities for PyTorch distributed ☆24 · Updated 2 months ago
- ☆14 · Updated 5 months ago
- ☆33 · Updated 7 months ago
- ☆31 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated last month
- NeuMeta transforms neural networks by allowing a single model to adapt on the fly to different sizes, generating the right weights when n… ☆42 · Updated 5 months ago
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆23 · Updated last year
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆24 · Updated 3 months ago
- ☆19 · Updated last month
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆12 · Updated last month
- RS-IMLE ☆38 · Updated 5 months ago
- Implementation of a holodeck, written in PyTorch ☆17 · Updated last year
- ☆34 · Updated 4 months ago
- ☆22 · Updated 10 months ago
- Efficient scaling laws and collaborative pretraining. ☆16 · Updated 3 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆30 · Updated 10 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆19 · Updated last month
- Code, results, and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family. ☆29 · Updated last month
- ☆20 · Updated 2 weeks ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated this week
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆11 · Updated 11 months ago
- Combining SOAP and Muon ☆16 · Updated 2 months ago
- Source code for the paper "Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning" ☆14 · Updated 3 months ago