gordicaleksa / OpenGemini
Effort to open-source the 10.5 trillion parameter Gemini model.
☆17 · Updated 2 years ago
Alternatives and similar repositories for OpenGemini
Users interested in OpenGemini are comparing it to the libraries listed below.
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆81 · Updated last month
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public. ☆124 · Updated this week
- ☆16 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 7 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients". ☆103 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO. ☆29 · Updated last month
- FlashRNN - Fast RNN Kernels with I/O Awareness. ☆174 · Updated 2 months ago
- Implementation of a Light Recurrent Unit in PyTorch. ☆49 · Updated last year
- Collection of autoregressive model implementations. ☆85 · Updated this week
- Explorations into adversarial losses on top of autoregressive loss for language modeling. ☆41 · Updated 3 weeks ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts. ☆121 · Updated last year
- ☆82 · Updated last year
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch. ☆134 · Updated 2 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto. ☆57 · Updated last year
- Visualising Losses in Deep Neural Networks. ☆16 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆126 · Updated last year
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch. ☆96 · Updated 10 months ago
- Implementation of the 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆46 · Updated 4 months ago
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind. ☆57 · Updated 7 months ago
- GoldFinch and other hybrid transformer components. ☆45 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention. ☆100 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens). ☆55 · Updated 9 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM. ☆61 · Updated last year
- Training hybrid models for dummies. ☆29 · Updated 2 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch. ☆70 · Updated last month
- An implementation of the Llama architecture, to instruct and delight. ☆21 · Updated 7 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research. ☆105 · Updated last year
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster. ☆71 · Updated 7 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind. ☆132 · Updated 2 months ago