kotoba-tech / kotomamba
A Mamba training library developed by Kotoba Technologies.
☆70 · Updated last year
Alternatives and similar repositories for kotomamba
Users interested in kotomamba are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆119 · Updated 2 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆44 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Japanese LLaMA experiment ☆54 · Updated last month
- Token Omission Via Attention ☆128 · Updated last year
- ☆16 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- Ongoing research training Mixture-of-Experts models ☆21 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆33 · Updated last year
- CycleQD is a framework for parameter-space model merging ☆45 · Updated 10 months ago
- Train, tune, and run inference with the Bamba model ☆137 · Updated 6 months ago
- ☆62 · Updated last year
- ☆41 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆230 · Updated last year
- ☆76 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆132 · Updated last month
- Code for pre-training BabyLM baseline models ☆16 · Updated 2 years ago
- ☆121 · Updated last year
- Easily run PyTorch on multiple GPUs & machines ☆54 · Updated last week
- A toolkit for scaling law research ⚖ ☆53 · Updated 10 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated 2 years ago
- ☆20 · Updated last year
- Here we will test various linear attention designs. ☆62 · Updated last year
- 0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" i… ☆310 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago