kotoba-tech / kotomamba
A Mamba training library developed by Kotoba Technologies.
☆70 · Updated last year
Alternatives and similar repositories for kotomamba
Users interested in kotomamba are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- ☆10 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Ongoing research project for continual pre-training of LLMs (dense model) ☆41 · Updated 3 months ago
- ☆60 · Updated 11 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆86 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 8 months ago
- ☆108 · Updated last year
- Japanese LLaMa experiment ☆52 · Updated 5 months ago
- ☆20 · Updated last year
- ☆72 · Updated last year
- ☆13 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated last year
- A toolkit for scaling law research ⚖ ☆49 · Updated 4 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated last year
- ☆14 · Updated 8 months ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆106 · Updated 4 months ago
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 5 months ago
- Token Omission Via Attention ☆126 · Updated 7 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆33 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Ongoing research on training Mixture-of-Experts models. ☆19 · Updated 8 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training". ☆125 · Updated this week
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆93 · Updated 10 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆56 · Updated last year
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆27 · Updated 2 months ago