gordicaleksa / OpenGemini
Effort to open-source the 10.5-trillion-parameter Gemini model.
☆17 · Updated last year
Alternatives and similar repositories for OpenGemini:
Users interested in OpenGemini are comparing it to the repositories listed below:
- PyTorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆49 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆28 · Updated last week
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Training hybrid models for dummies. ☆20 · Updated last month
- Exploration into the Firefly algorithm in PyTorch ☆35 · Updated last week
- Train a SmolLM-style LLM on FineWeb-Edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated 2 weeks ago
- ☆16 · Updated 2 weeks ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆95 · Updated 2 months ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 3 weeks ago
- Implementation of a Light Recurrent Unit in PyTorch ☆48 · Updated 4 months ago
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆36 · Updated 4 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆23 · Updated last month
- ☆32 · Updated last month
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆116 · Updated 4 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 months ago
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆73 · Updated last week
- Utilities for PyTorch distributed ☆23 · Updated last year
- Official repository for Gradient Agreement Filtering (GAF). ☆22 · Updated 3 weeks ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆44 · Updated last week
- ☆33 · Updated 5 months ago
- Collection of autoregressive model implementations ☆81 · Updated last week
- Toy genetic algorithm in PyTorch ☆33 · Updated 2 weeks ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- ☆32 · Updated last year
- JAX implementation of Black Forest Labs' Flux.1 family of models ☆29 · Updated 4 months ago
- Implementation of Spectral State Space Models ☆16 · Updated 11 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 8 months ago