BlinkDL / WorldModel
Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and for games / fiction / business / finance / governance, and can help align agents with humans too.
☆40 · Updated 2 years ago
Alternatives and similar repositories for WorldModel:
Users interested in WorldModel are comparing it to the libraries listed below:
- ☆42 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- Here we collect trick questions and failed tasks for open-source LLMs to improve them. ☆32 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated last year
- RWKV model implementation ☆37 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- A large-scale RWKV v6/v7 (World, ARWKV, PRWKV) inference engine, capable of inference by combining multiple states (pseudo-MoE). Easy to deploy o… ☆35 · Updated this week
- GoldFinch and other hybrid Transformer components ☆45 · Updated 9 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 8 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- ☆40 · Updated 2 years ago
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated 2 years ago
- tinygrad port of the RWKV large language model ☆44 · Updated last month
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows and Linux]; limited to the 430M model at this… ☆20 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights so the result works like Stanford Alpaca ☆51 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your own data, using the Alpaca dataset ☆31 · Updated 2 years ago
- RWKV, in easy-to-read code ☆72 · Updated last month
- Repository for "TESS-2: A Large-Scale, Generalist Diffusion Language Model" ☆34 · Updated 2 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Fast, modular code to create and train cutting-edge LLMs ☆66 · Updated 11 months ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (see the sketch after this list) ☆60 · Updated 3 years ago
- BigKnow2022: Bringing Language Models Up to Speed ☆15 · Updated 2 years ago
- RWKV-7: Surpassing GPT ☆83 · Updated 5 months ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" ☆52 · Updated 8 months ago
- A converter and basic tester for RWKV ONNX ☆42 · Updated last year
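One entry above, LayerNorm(SmallInit(Embedding)), names a concrete trick: initialize the token-embedding weights at a very small scale and pass them through a LayerNorm before the first Transformer block, which tends to speed up early convergence. Below is a minimal PyTorch sketch of that idea under stated assumptions; the 1e-4 init range and the class/parameter names are illustrative choices, not taken from that repository.

```python
import torch
import torch.nn as nn

class SmallInitEmbedding(nn.Module):
    """Sketch of LayerNorm(SmallInit(Embedding)): tiny-scale embedding init
    followed by LayerNorm before the Transformer blocks. The init range
    (1e-4) is an assumption for illustration."""

    def __init__(self, vocab_size: int, d_model: int, init_scale: float = 1e-4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        # "Small init": keep embedding weights near zero at the start
        nn.init.uniform_(self.emb.weight, a=-init_scale, b=init_scale)
        # Normalize the embedding output before the first block
        self.ln = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # LayerNorm(SmallInit(Embedding(x)))
        return self.ln(self.emb(token_ids))

# Usage: drop this in as a Transformer's input embedding layer.
tokens = torch.randint(0, 50257, (2, 16))       # (batch, seq_len) token ids
hidden = SmallInitEmbedding(50257, 512)(tokens)  # (2, 16, 512) normalized embeddings
```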