BlinkDL / RWKV-v2-RNN-Pile
RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
☆66 · Updated 2 years ago
Alternatives and similar repositories for RWKV-v2-RNN-Pile:
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra…☆64 · Updated last year
- ☆42 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference☆34 · Updated 3 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile☆115 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit☆63 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /…☆40 · Updated last year
- Experiments with generating open-source language model assistants☆97 · Updated last year
- Framework-agnostic Python runtime for RWKV models☆145 · Updated last year
- Code for the paper "Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm" (https://arxiv.org/abs/2007.14966)☆57 · Updated 2 years ago
- BigKnow2022: Bringing Language Models Up to Speed☆14 · Updated last year
- ☆128 · Updated 2 years ago
- One-stop shop for all things carp☆59 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing☆47 · Updated 3 years ago
- Transformers at any scale☆41 · Updated last year
- Another attempt at a long-context / efficient transformer by me☆37 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia☆41 · Updated last year
- Contrastive Language-Image Pretraining☆142 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network☆34 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023)☆130 · Updated 9 months ago
- ☆67 · Updated 2 years ago
- CLOOB training (JAX) and inference (JAX and PyTorch)☆70 · Updated 2 years ago
- ChatGPT-like web UI for RWKVstic☆100 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.☆164 · Updated this week
- ☆93 · Updated last year
- RWKV model implementation☆37 · Updated last year
- An experimental implementation of the retrieval-enhanced language model☆74 · Updated 2 years ago
- ☆20 · Updated last year
- Train vision models using JAX and 🤗 Transformers☆97 · Updated this week
- This contains the Flax model of min(DALL·E) and code for converting it to PyTorch☆46 · Updated 2 years ago