BlinkDL / nanoRWKV
RWKV in nanoGPT style
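For readers new to the architecture: RWKV replaces self-attention with a linear-time recurrence (the WKV operator), which is what makes a nanoGPT-style reimplementation compact. Below is a minimal, unoptimized sketch of an RWKV-4 style WKV recurrence in PyTorch; the per-channel decay `w` and bonus `u` follow the paper's notation, and this plain Python loop stands in for the numerically stabilized fused kernel a real implementation would use.

```python
import torch

def wkv_naive(w, u, k, v):
    """Illustrative O(T) WKV recurrence (RWKV-4 style).

    w: (C,) positive per-channel decay rate
    u: (C,) per-channel bonus applied to the current token
    k, v: (T, C) key and value sequences
    Returns a (T, C) attention-free mixing output.
    Sketch only: real kernels track a running maximum for
    numerical stability and fuse this loop on the GPU.
    """
    T, C = k.shape
    out = torch.empty(T, C)
    num = torch.zeros(C)  # decayed sum of exp(k_i) * v_i over past tokens
    den = torch.zeros(C)  # decayed sum of exp(k_i) over past tokens
    for t in range(T):
        ek = torch.exp(k[t])
        eb = torch.exp(u + k[t])  # boosted weight for the current token
        out[t] = (num + eb * v[t]) / (den + eb)
        num = torch.exp(-w) * num + ek * v[t]
        den = torch.exp(-w) * den + ek
    return out
```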
Related projects:
- Inference of Mamba models in pure C
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!
- RWKV, in easy-to-read code
- Fast modular code to create and train cutting-edge LLMs
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models"
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" (see the ternarization sketch after this list)
- Micro Llama, a small Llama-based model with 300M parameters, trained from scratch on a $500 budget
- Token Omission Via Attention
- Evaluating LLMs with Dynamic Data
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
- Python bindings for ggml
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff"
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (official code)
- Multipack distributed sampler for fast padding-free training of LLMs
- Experiments on speculative sampling with Llama models (a minimal sketch of the idea follows this list)
- Beyond Language Models: Byte Models are Digital World Simulators
- A scalable and robust tree-based speculative decoding algorithm
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients
- A pipeline for LLM knowledge distillation
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024)
- Some preliminary explorations of Mamba's context scaling
- Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models"
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793)
- Griffin MQA + Hawk Linear RNN Hybrid
- PB-LLM: Partially Binarized Large Language Models
- A byte-level decoder architecture that matches the performance of tokenized Transformers
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention"
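On the speculative-sampling entry above: the core trick is to let a small draft model propose several tokens cheaply and have the large target model verify them all in a single forward pass. The sketch below is a simplified greedy variant, assuming both models are callables mapping (1, T) token ids to (1, T, V) logits; the published algorithm instead samples and accepts each draft token with probability min(1, p_target/p_draft).

```python
import torch

@torch.no_grad()
def greedy_speculative_step(target, draft, ids, n_draft=4):
    """One speculation round: the draft proposes, the target verifies.

    Accepts the longest prefix of drafted tokens that matches the
    target's own greedy choices, then appends one target token, so
    every round makes at least one token of progress.
    """
    T = ids.shape[1]
    proposal = ids
    for _ in range(n_draft):  # cheap autoregressive drafting
        nxt = draft(proposal)[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=1)
    logits = target(proposal)                       # one parallel verify pass
    verify = logits[:, T - 1 : -1].argmax(-1)       # target's pick at each drafted slot
    drafted = proposal[:, T:]
    ok = (verify == drafted).long().cumprod(dim=1)  # longest agreeing prefix
    n_ok = int(ok.sum())
    if n_ok < n_draft:                              # first disagreement: take target's token
        fix = verify[:, n_ok : n_ok + 1]
    else:                                           # all accepted: extend past the proposal
        fix = logits[:, -1].argmax(-1, keepdim=True)
    return torch.cat([proposal[:, : T + n_ok], fix], dim=1)
```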
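And on the 1-bit LLM entry: the BitNet b1.58 paper quantizes weights to the ternary set {-1, 0, +1} with an absmean scale. A minimal sketch of that quantizer in its post-hoc form (the paper applies it inside quantization-aware training with a straight-through estimator):

```python
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Ternarize a weight tensor as in BitNet b1.58 ("The Era of 1-bit LLMs").

    Scales by the mean absolute weight, then rounds and clips to
    {-1, 0, +1}; the original tensor is approximated by scale * w_q.
    """
    scale = w.abs().mean().clamp(min=eps)   # gamma = mean |W|
    w_q = (w / scale).round().clamp(-1, 1)  # RoundClip to {-1, 0, +1}
    return w_q, scale

# usage: w_q, s = absmean_ternary(linear.weight)  # linear.weight ~ s * w_q
```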