alxndrTL / othello_mamba
Evaluating the Mamba architecture on the Othello game
☆47 · Updated last year
Alternatives and similar repositories for othello_mamba
Users interested in othello_mamba are comparing it to the repositories listed below.
- Griffin MQA + Hawk Linear RNN Hybrid ☆86 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- ☆52 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆81 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆197 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆128 · Updated 3 weeks ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆126 · Updated 9 months ago
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 7 months ago
- Mixture of A Million Experts ☆46 · Updated 10 months ago
- Experiments on the impact of depth in transformers and SSMs. ☆30 · Updated 7 months ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆51 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 5 months ago
- ☆53 · Updated last year
- Implementation of GateLoop Transformer in PyTorch and JAX ☆88 · Updated 11 months ago
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 5 months ago
- ☆29 · Updated 6 months ago
- ☆37 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆100 · Updated 5 months ago
- Some preliminary explorations of Mamba's context scaling. ☆214 · Updated last year
- ☆46 · Updated last year
- ☆78 · Updated 11 months ago
- ☆53 · Updated 8 months ago
- Fast modular code to create and train cutting-edge LLMs ☆66 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆80 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆181 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆234 · Updated 3 months ago
- ☆79 · Updated 9 months ago
- RWKV, in easy-to-read code ☆72 · Updated 2 months ago