BBuf / RWKV-World-HF-Tokenizer
☆34 · Updated 11 months ago
Alternatives and similar repositories for RWKV-World-HF-Tokenizer
Users interested in RWKV-World-HF-Tokenizer are comparing it to the libraries listed below.
- A fast RWKV Tokenizer written in Rust ☆46 · Updated this week
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- ☆35 · Updated last year
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 2 months ago
- ☆59 · Updated 3 months ago
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (Pseudo MoE). Easy to de… ☆38 · Updated last week
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated 11 months ago
- ☆37 · Updated 2 months ago
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆20 · Updated last month
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆41 · Updated 8 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMa, OPT, and Pythia ☆41 · Updated 2 years ago
- ☆49 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 10 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated 2 weeks ago
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs. ☆26 · Updated 5 months ago
- Fast, modular code to create and train cutting-edge LLMs ☆67 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- QuIP quantization ☆54 · Updated last year
- ☆17 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 7 months ago
- Unofficial Implementation of Evolutionary Model Merging ☆39 · Updated last year
- PyTorch implementation of Titans. ☆24 · Updated 5 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of MosaicML's llmfoundry ☆42 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆38Updated last month