harubaru / convogpt
Conversational language model toolkit for training against human preferences.
☆42 · Updated last year
Alternatives and similar repositories for convogpt
Users interested in convogpt are comparing it to the libraries listed below.
- Platform- and API-agnostic library for powering chatbots ☆24 · Updated 2 years ago
- Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B ☆62 · Updated 4 years ago
- Our data munging code. ☆34 · Updated last month
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- Where we keep our notes about model training runs. ☆16 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- A ready-to-deploy container implementing an easy-to-use REST API for accessing language models. ☆66 · Updated 2 years ago
- Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab TPU instance ☆28 · Updated 2 years ago
- Image-diffusion block-merging technique applied to transformers-based language models. ☆56 · Updated 2 years ago
- ChatGPT-like web UI for RWKVstic ☆100 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. ☆56 · Updated 3 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- An unsupervised model-merging algorithm for Transformers-based language models. ☆108 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to 430M model at this… ☆21 · Updated 2 years ago
- rwkv_chatbot ☆62 · Updated 2 years ago
- ☆75 · Updated 3 years ago
- Doohickey is a Stable Diffusion tool for technical artists who want to stay up-to-date with the latest developments in the field. ☆40 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- k_diffusion wrapper included for k_lms sampling; fixed for notebook. ☆21 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- SentencePiece-based BPE tokenizer for English and Japanese text. ☆28 · Updated last year
- Hidden Engrams: Long-Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- Repository with which to explore k-diffusion and diffusers, and within which changes to said packages may be tested. ☆54 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago