BlinkDL / WorldModel
Let us make Psychohistory (as in Asimov) a reality, and make it accessible to everyone. Useful for LLM grounding and for games / fiction / business / finance / governance, and it can help align agents with humans too.
☆39 · Updated 2 years ago
Alternatives and similar repositories for WorldModel
Users interested in WorldModel are comparing it to the repositories listed below
- Demonstration that finetuning a RoPE model on sequences longer than its pretraining length adapts the model's context limit ☆62 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- Here we collect trick questions and failed tasks for open-source LLMs to improve them. ☆31 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆40 · Updated 2 years ago
- RWKV model implementation ☆38 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 3 years ago
- RWKV in nanoGPT style ☆193 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 4 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- ☆39 · Updated last year
- Token Omission Via Attention ☆127 · Updated last year
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- ☆18 · Updated last year
- ☆50 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆48 · Updated 2 years ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆38 · Updated 8 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine, capable of inference combining multiple states (pseudo MoE). Easy to deploy… ☆45 · Updated last week
- ☆19 · Updated 5 months ago
- BigKnow2022: Bringing Language Models Up to Speed ☆16 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the sketch after this list) ☆58 · Updated 3 years ago
- Evaluating LLMs with Dynamic Data ☆96 · Updated 3 months ago
- O-GIA is an umbrella for a research, infrastructure, and projects ecosystem that should provide open-source, reproducible datasets, models, … ☆89 · Updated 2 years ago
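The LayerNorm(SmallInit(Embedding)) entry above refers to an initialization trick: start the token embedding near zero and apply LayerNorm directly to its output, which is reported to improve convergence early in training. Below is a minimal PyTorch sketch of the idea, not code from that repository; the class name and the 1e-4 init scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Token embedding with tiny initialization, followed by LayerNorm.

    A sketch of the LayerNorm(SmallInit(Embedding)) trick; the init
    scale is an assumed, illustrative value.
    """
    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        # SmallInit: initialize the embedding near zero instead of the default
        nn.init.uniform_(self.emb.weight, -init_scale, init_scale)
        # LayerNorm applied immediately to the embedding output
        self.ln = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.ln(self.emb(token_ids))

# Usage: a drop-in replacement for the input embedding of a Transformer.
tokens = torch.tensor([[1, 2, 3]])
hidden = SmallInitEmbedding(vocab_size=50257, d_model=768)(tokens)
print(hidden.shape)  # torch.Size([1, 3, 768])
```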