KindXiaoming / grow-crystals
Getting crystal-like representations with harmonic loss
☆195 · Updated 10 months ago
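For context on the technique named in the tagline, here is a minimal PyTorch sketch of a distance-based "harmonic" classification loss, assuming the commonly described formulation in which dot-product logits plus softmax are replaced by L2 distances to per-class weight vectors normalized with an inverse-power (harmonic) rule. The class name, exponent `n`, and `eps` below are illustrative and are not the repository's actual API.

```python
import torch
import torch.nn as nn


class HarmonicLoss(nn.Module):
    """Sketch of a harmonic loss head (illustrative, not the repo's implementation).

    Class scores are L2 distances between the input representation and per-class
    weight vectors; the class probability is an inverse-power ("harmonic")
    normalization of those distances instead of a softmax over dot products.
    """

    def __init__(self, dim: int, num_classes: int, n: float = 1.0, eps: float = 1e-8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, dim))  # one vector per class
        self.n = n      # harmonic exponent (hyperparameter, assumed)
        self.eps = eps  # avoids log(0) when a point coincides with a class vector

    def forward(self, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # d[b, c] = || x[b] - w[c] ||_2
        d = torch.cdist(x, self.weight) + self.eps
        # p[b, c] ∝ d[b, c]^(-n), normalized over classes (computed in log space)
        log_p = -self.n * torch.log(d)
        log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)
        # negative log-likelihood of the target class
        return -log_p.gather(1, target.unsqueeze(1)).mean()
```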
Alternatives and similar repositories for grow-crystals
Users interested in grow-crystals are comparing it to the libraries listed below.
- JAX codebase for Evolutionary Strategies at the Hyperscale ☆216 · Updated last month
- The AdEMAMix Optimizer: Better, Faster, Older ☆186 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA AI ☆293 · Updated 8 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆349 · Updated 2 months ago
- ☆82 · Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆98 · Updated 6 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆144 · Updated 2 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Updated 4 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆70 · Updated last year
- Repository for code used in the xVal paper ☆148 · Updated last year
- ☆109 · Updated 6 months ago
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 8 months ago
- Efficient optimizers ☆281 · Updated last month
- Attempt to make multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆168 · Updated 2 weeks ago
- ☆246 · Updated last year
- A State-Space Model with Rational Transfer Function Representation ☆83 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 10 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- ☆314 · Updated last year
- ☆215 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- Focused on fast experimentation and simplicity ☆80 · Updated last year
- PyTorch implementation of models from the Zamba2 series ☆186 · Updated last year
- ☆214 · Updated 3 weeks ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆122 · Updated last year