josejuanmartinez / mindcraft
Mindcraft, the open-source NLP solution for crafting the minds of the NPC characters in your video games.
☆45 · Updated last year
Alternatives and similar repositories for mindcraft
Users interested in mindcraft are comparing it to the repositories listed below.
- LLM-powered NPCs running on your hardware ☆331 · Updated last year
- Let's build better datasets, together! ☆262 · Updated 9 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Train Llama LoRAs easily ☆30 · Updated 2 years ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆288 · Updated 7 months ago
- Tune MPTs ☆84 · Updated 2 years ago
- ☆124 · Updated 11 months ago
- This repository's goal is to compile all past presentations of the Hugging Face reading group ☆48 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆160 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- An unsupervised model merging algorithm for Transformer-based language models ☆106 · Updated last year
- Open-Source Generative Agents is a community-driven fork of 'Generative Agents,' aimed at enabling compatibility with open-source Large L… ☆61 · Updated last year
- Drop-in replacement for OpenAI, but with open models ☆153 · Updated 2 years ago
- Repo for the Belebele dataset, a massively multilingual reading comprehension dataset ☆335 · Updated 9 months ago
- Domain Adapted Language Modeling Toolkit - E2E RAG ☆328 · Updated 11 months ago
- ☆207 · Updated last year
- Merge Transformers language models using gradient parameters ☆208 · Updated last year
- ☆169 · Updated 7 months ago
- Tune any FALCON in 4-bit ☆464 · Updated 2 years ago
- Multi-Domain Expert Learning ☆66 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆76 · Updated last year
- ☆463 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated 2 years ago
- A bagel, with everything. ☆324 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Fine-tuning LLMs using QLoRA ☆265 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆178 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 4 months ago