alxndrTL / othello_mamba
Evaluating the Mamba architecture on the Othello game
★49 · Updated last year
Alternatives and similar repositories for othello_mamba
Users interested in othello_mamba are comparing it to the repositories listed below:
- Griffin MQA + Hawk Linear RNN Hybrid · ★88 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ★136 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ★181 · Updated 6 months ago
- Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. · ★54 · Updated last year
- ★62 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax · ★91 · Updated last year
- ★82 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM · ★61 · Updated last year
- Token Omission Via Attention · ★128 · Updated last year
- Understand and test language model architectures on synthetic tasks. · ★249 · Updated this week
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts", by Xu Owen He at DeepMind · ★132 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" · ★103 · Updated last year
- ★35 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX · ★92 · Updated last year
- Mixture of A Million Experts · ★52 · Updated last year
- ★53 · Updated last year
- RWKV-7: Surpassing GPT · ★103 · Updated last year
- ★53 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning · ★170 · Updated 11 months ago
- Some preliminary explorations of Mamba's context scaling. · ★218 · Updated last year
- ★35 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ★53 · Updated 2 years ago
- ★108 · Updated 5 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ★100 · Updated last year
- Fast modular code to create and train cutting edge LLMs · ★68 · Updated last year
- A State-Space Model with Rational Transfer Function Representation. · ★83 · Updated last year
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training · ★132 · Updated last year
- Triton Implementation of HyperAttention Algorithm · ★48 · Updated 2 years ago
- ★167 · Updated 2 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts · ★121 · Updated last year