KindXiaoming / grow-crystals
Getting crystal-like representations with harmonic loss
☆194 · Updated 5 months ago
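grow-crystals trains models with a harmonic loss, which scores classes by distance to learned class centers instead of by dot-product logits. A minimal PyTorch sketch of that idea, assuming the common formulation p_i ∝ ‖x − w_i‖^(−n); the exponent `n`, the `eps` stabilizer, and the function name are illustrative, not the repository's API:

```python
import torch
import torch.nn.functional as F

def harmonic_loss(x, centers, target, n=2.0, eps=1e-8):
    """Harmonic-loss sketch: classes are scored by inverse distance
    to learned class centers rather than by dot-product logits.

    x:       (batch, dim) input representations
    centers: (num_classes, dim) learned class centers
    target:  (batch,) integer class labels
    n:       harmonic exponent; larger n sharpens the distribution
    """
    d = torch.cdist(x, centers) + eps        # (batch, num_classes) distances
    log_p = -n * torch.log(d)                # unnormalized log of d^(-n)
    log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)  # normalize
    return F.nll_loss(log_p, target)         # -log p[target]
```

Under this formulation, probability mass concentrates on the nearest class center as `n` grows, which is the mechanism the repository associates with crystal-like representations.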
Alternatives and similar repositories for grow-crystals
Users interested in grow-crystals are comparing it to the libraries listed below.
- An implementation of PSGD Kron second-order optimizer for PyTorch☆96 · Updated last month
- The AdEMAMix Optimizer: Better, Faster, Older.☆186 · Updated last year
- DeMo: Decoupled Momentum Optimization☆190 · Updated 9 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources☆146 · Updated 3 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (a sketch of the EMA gradient filter follows this list)☆101 · Updated 8 months ago
- σ-GPT: A New Approach to Autoregressive Models☆67 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI☆290 · Updated 3 months ago
- Dion optimizer algorithm☆338 · Updated last week
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds☆291 · Updated last month
- ICLR 2025 - official implementation for "I-Con: A Unifying Framework for Representation Learning"☆111 · Updated 2 months ago
- An open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere)☆105 · Updated 6 months ago
- Efficient optimizers☆261 · Updated last month
- A State-Space Model with Rational Transfer Function Representation.☆79 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax"☆84 · Updated this week
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training"☆132 · Updated last week
- NanoGPT-speedrunning for the poor T4 enjoyers☆71 · Updated 4 months ago
- An open-source reproduction of AlphaEvolve☆67 · Updated 3 months ago
- H-Net Dynamic Hierarchical Architecture☆79 · Updated last month
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster☆70 · Updated 3 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers☆322 · Updated 10 months ago
- Supporting PyTorch FSDP for optimizers☆84 · Updated 9 months ago
- Repository for code used in the xVal paper☆144 · Updated last year
- Focused on fast experimentation and simplicity☆75 · Updated 8 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion"☆99 · Updated 3 months ago
- WIP☆94 · Updated last year
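For the Grokfast entry above: the paper's core trick is to low-pass filter gradients with an exponential moving average and amplify that slow component before each optimizer step. A minimal sketch under that reading; `alpha`, `lamb`, and the helper name are illustrative defaults, not the linked repository's exact interface:

```python
import torch

def gradfilter_ema(model, grads=None, alpha=0.98, lamb=2.0):
    """Grokfast-style gradient filter sketch: maintain an EMA of each
    parameter's gradient (the slow component) and add an amplified
    copy back into the gradient before the optimizer step."""
    if grads is None:
        # Initialize the EMA state from the first set of gradients.
        grads = {name: p.grad.detach().clone()
                 for name, p in model.named_parameters() if p.grad is not None}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        # Update the slow (EMA) component, then amplify it by lamb.
        grads[name] = alpha * grads[name] + (1 - alpha) * p.grad.detach()
        p.grad = p.grad + lamb * grads[name]
    return grads
```

Call it between `loss.backward()` and `optimizer.step()`, threading the returned `grads` dict through successive training steps so the moving average persists.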