BlinkDL / WorldModel
Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and for games / fiction / business / finance / governance, and it can also help align agents with humans.
☆40 · Updated 2 years ago
Alternatives and similar repositories for WorldModel
Users interested in WorldModel are comparing it to the libraries listed below.
- ☆42 · Updated 2 years ago
- Here we collect trick questions and failed tasks for open-source LLMs, to improve them. ☆32 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit. ☆63 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models. ☆146 · Updated last year
- This project aims to make RWKV accessible to everyone via a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference combining multiple states (pseudo-MoE). Easy to de… ☆38 · Updated 3 weeks ago
- Script and instructions for fine-tuning a large RWKV model on your data, for the Alpaca dataset. ☆31 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia. ☆41 · Updated 2 years ago
- RWKV-7: Surpassing GPT. ☆91 · Updated 7 months ago
- A converter and basic tester for RWKV ONNX. ☆42 · Updated last year
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model". ☆35 · Updated 4 months ago
- ☆82 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 9 months ago
- GoldFinch and other hybrid transformer components. ☆45 · Updated 11 months ago
- RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance. ☆41 · Updated 2 years ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- This project is established for real-time training of the RWKV model. ☆49 · Updated last year
- RWKV model implementation. ☆38 · Updated last year
- RWKV, in easy-to-read code. ☆72 · Updated 2 months ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆21 · Updated 2 years ago
- Evaluating LLMs with dynamic data. ☆93 · Updated last month
- RWKV in nanoGPT style. ☆191 · Updated last year
- Here we will test various linear-attention designs. ☆59 · Updated last year
- 32 times longer context window than vanilla Transformers, and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated. ☆32 · Updated 10 months ago
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile. ☆115 · Updated 2 years ago
- RWKV-7 mini. ☆11 · Updated 2 months ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling". ☆16 · Updated 7 months ago
- BigKnow2022: Bringing Language Models Up to Speed. ☆15 · Updated 2 years ago