BlinkDL / RWKV-v2-RNN-Pile
RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
☆67 · Updated 2 years ago
Alternatives and similar repositories for RWKV-v2-RNN-Pile
Users interested in RWKV-v2-RNN-Pile are comparing it to the libraries listed below.
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆131 · Updated 3 years ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 3 years ago
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Text-writing denoising diffusion (and much more) ☆30 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- ☆91 · Updated 2 years ago
- O-GIA is an umbrella for a research, infrastructure, and projects ecosystem that should provide open-source, reproducible datasets, models, … ☆90 · Updated 2 years ago
- A *tuned* minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training ☆117 · Updated 4 years ago
- Fine-tuning 6-billion-parameter GPT-J (& other models) with LoRA and 8-bit compression ☆66 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated 2 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆220 · Updated last year
- JAX implementation of VQGAN ☆92 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Latent Diffusion Language Models ☆69 · Updated last year
- This contains the Flax model of min(DALL·E) and code for converting it to PyTorch ☆45 · Updated 3 years ago
- An experimental implementation of the retrieval-enhanced language model ☆76 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework) ☆187 · Updated 3 years ago
- ☆66 · Updated last year