kotoba-tech / kotomamba
Mamba training library developed by kotoba technologies
☆68 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for kotomamba
- Checkpointable dataset utilities for foundation model training ☆32 · Updated 9 months ago
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆12 · Updated 9 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated 9 months ago
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated 6 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 6 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆28 · Updated last week
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆78 · Updated 8 months ago
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Japanese LLaMa experiment ☆52 · Updated 8 months ago
- ☆51 · Updated 5 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆112 · Updated 2 months ago
- Implementation of Infini-Transformer in PyTorch ☆104 · Updated last month
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆71 · Updated last month
- A toolkit for scaling law research ⚖ ☆43 · Updated 8 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆108 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆92 · Updated last month
- A robust text processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing ☆118 · Updated 3 weeks ago
- ☆77 · Updated 5 months ago
- ☆62 · Updated 3 months ago
- Token Omission Via Attention ☆120 · Updated last month
- Here we will test various linear attention designs. ☆56 · Updated 6 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆49 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated 11 months ago
- A large-scale RWKV v6 inference with FLA. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy on Docker. Suppo… ☆16 · Updated last week
- Swallow project: evaluation scripts for large language models ☆10 · Updated 4 months ago
- ☆71 · Updated 6 months ago
- ☆38 · Updated 7 months ago
- Code for Zero-Shot Tokenizer Transfer ☆115 · Updated 3 weeks ago