kotoba-tech / kotomamba
Mamba training library developed by Kotoba Technologies
☆71 · Updated last year
Alternatives and similar repositories for kotomamba
Users interested in kotomamba are comparing it to the libraries listed below.
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- ☆60 · Updated last year
- ☆10 · Updated last year
- Ongoing research project for continual pre-training of LLMs (dense mode) ☆42 · Updated 3 months ago
- ☆22 · Updated last year
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆28 · Updated 2 months ago
- ☆41 · Updated last year
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 weeks ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆106 · Updated 4 months ago
- ☆16 · Updated 9 months ago
- Japanese LLaMA experiment ☆53 · Updated 6 months ago
- Griffin MQA + Hawk Linear RNN hybrid ☆87 · Updated last year
- ☆15 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 8 months ago
- ☆14 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆81 · Updated last year
- Ongoing research on training Mixture-of-Experts models ☆18 · Updated 9 months ago
- ☆79 · Updated 10 months ago
- Swallow project: evaluation scripts for large language models ☆17 · Updated 2 months ago
- ☆42 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated last year
- ☆81 · Updated last year
- Code for pre-training BabyLM baseline models ☆15 · Updated 2 years ago
- A robust text-processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing ☆122 · Updated this week
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆54 · Updated last year
- A toolkit for scaling law research ⚖ ☆49 · Updated 4 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago