OpenMOSE / RWKV-Infer
A large-scale RWKV v6 inference engine built on FLA. Capable of inference that combines multiple states (pseudo-MoE). Easy to deploy on Docker. Supports true multi-batch generation and dynamic state switching. CUDA and ROCm supported :)
☆15 · Updated 2 weeks ago
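The pseudo-MoE idea described above, combining several tuned RWKV states, can be pictured as blending state tensors before generation. Below is a minimal sketch of that idea, assuming a plain PyTorch representation of the recurrent state; the shapes, function names, and weights are illustrative assumptions, not RWKV-Infer's actual API.

```python
import torch

def blend_states(states: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Blend several tuned RWKV recurrent states into one by weighted averaging."""
    assert len(states) == len(weights) and abs(sum(weights) - 1.0) < 1e-6
    blended = torch.zeros_like(states[0])
    for state, weight in zip(states, weights):
        blended += weight * state
    return blended

# Two independently tuned states (e.g. one for code, one for chat), mixed 70/30
# for one request; dynamic state switching would swap the blend per request
# without reloading the model weights. Shapes are placeholders.
state_code = torch.randn(32, 64, 64)  # (layers, heads, head_dim) -- assumed layout
state_chat = torch.randn(32, 64, 64)
request_state = blend_states([state_code, state_chat], [0.7, 0.3])
```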
Related projects:
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆34 · Updated 10 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆22 · Updated 3 months ago
- Here we will test various linear attention designs. ☆55 · Updated 4 months ago
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆27 · Updated last month
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆34 · Updated 9 months ago
- RWKV model implementation ☆38 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆20 · Updated 2 months ago
- GoldFinch and other hybrid transformer components ☆38 · Updated 2 months ago
- [ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluating ☆26 · Updated last week
- RWKV v5/v6 LoRA trainer for CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like … ☆11 · Updated 5 months ago
- BigKnow2022: Bringing Language Models Up to Speed ☆13 · Updated last year
- Official repository for Efficient Linear-Time Attention Transformers. ☆17 · Updated 3 months ago
- Awesome Triton Resources ☆16 · Updated 3 weeks ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆31 · Updated 3 months ago
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆15 · Updated this week
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆24 · Updated 5 months ago
- Efficient PScan implementation in PyTorch ☆15 · Updated 8 months ago
- Mamba training library developed by Kotoba Technologies ☆63 · Updated 7 months ago
- A repository for research on medium-sized language models. ☆71 · Updated 3 months ago
- Checkpointable dataset utilities for foundation model training ☆31 · Updated 7 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆63 · Updated last month
- Linear Attention Sequence Parallelism (LASP) ☆64 · Updated 3 months ago