recursal / ai-town-rwkv-proxy
Run a large AI town, locally, via RWKV!
☆151 · Updated last year
Alternatives and similar repositories for ai-town-rwkv-proxy:
Users interested in ai-town-rwkv-proxy are comparing it to the repositories listed below.
- ☆111 · Updated 2 months ago
- All the world is a play, we are but actors in it. ☆47 · Updated this week
- ☆152 · Updated 7 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated 10 months ago
- Generative Agents: Interactive Simulacra of Human Behavior ☆96 · Updated last year
- ☆49 · Updated last week
- Our own implementation of 'Layer Selective Rank Reduction'. ☆233 · Updated 9 months ago
- Merge Transformers language models using gradient parameters. ☆205 · Updated 7 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention. ☆118 · Updated last year
- An all-new language model that processes ultra-long sequences of 100,000+ ultra-fast. ☆146 · Updated 6 months ago
- entropix-style sampling + GUI. ☆25 · Updated 4 months ago
- ☆36 · Updated last year
- Multimodal computer agent data collection program. ☆122 · Updated last year
- Scripts to create your own MoE models using MLX. ☆89 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub. ☆157 · Updated last year
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI. ☆224 · Updated 10 months ago
- Image Diffusion block-merging technique applied to Transformers-based language models. ☆54 · Updated last year
- A framework enabling multimodal models to play games on a computer. ☆98 · Updated 11 months ago
- ☆65 · Updated 9 months ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆98 · Updated this week
- Full finetuning of large language models without large memory requirements. ☆93 · Updated last year
- ☆51 · Updated 7 months ago
- Low-rank adapter extraction for fine-tuned Transformers models. ☆171 · Updated 10 months ago
- AGI has been achieved externally. ☆9 · Updated last year
- ☆172 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens. ☆135 · Updated 3 weeks ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA). ☆123 · Updated last year
- Automated prompting and scoring framework for evaluating LLMs with updated human-knowledge prompts. ☆111 · Updated last year
- ☆73 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆77 · Updated 11 months ago