kotoba-tech / kotomamba
Mamba training library developed by Kotoba Technologies
☆69 · Updated last year
Alternatives and similar repositories for kotomamba
Users interested in kotomamba are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆88 · Updated last year
- ☆41 · Updated last year
- ☆62 · Updated last year
- ☆16 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆120 · Updated 3 months ago
- Example of using Epochraft to train Hugging Face Transformers models with PyTorch FSDP ☆11 · Updated 2 years ago
- Japanese LLaMA experiment ☆54 · Updated last month
- Fast, modern, and low-precision PyTorch optimizers ☆120 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- ☆20 · Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆34 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- Trying out the Mamba architecture on small examples (CIFAR-10, character-level Shakespeare, etc.) ☆47 · Updated 2 years ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆92 · Updated last year
- Token Omission Via Attention ☆128 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Ongoing research project for continual pre-training of LLMs (dense model) ☆44 · Updated 10 months ago
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆310 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Implementation of MambaByte, from "MambaByte: Token-free Selective State Space Model", in PyTorch and Zeta ☆125 · Updated 2 weeks ago
- CycleQD is a framework for parameter-space model merging. ☆48 · Updated 11 months ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆163 · Updated last week
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated 2 years ago
- A toolkit for scaling law research ⚖ ☆55 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆133 · Updated 2 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆122 · Updated last year