lumpenspace / raft
RAFT, or Retrieval-Augmented Fine-Tuning, is a method that combines a fine-tuning phase with a RAG-based retrieval phase. It is particularly well suited to creating agents that realistically emulate a specific human target; a minimal sketch of the two phases is given below.
☆74 · Updated 2 months ago
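The sketch below illustrates how the two phases can fit together. It is an illustrative Python sketch under stated assumptions, not this repository's code: `fine_tune`, `retrieve`, and `build_prompt` are hypothetical placeholders, and the word-overlap retriever stands in for a real embedding-based vector store.

```python
# Minimal sketch of a RAFT-style pipeline (illustrative only: the helper
# names, the toy corpus, and the word-overlap retriever are assumptions,
# not this repository's actual API).

def fine_tune(corpus):
    """Phase 1 (placeholder): fine-tune a base model on the target's own
    writing so it picks up their voice. Here it only returns a persona prefix."""
    return "You are writing in the voice of the target person.\n"

def retrieve(query, memories, k=2):
    """Phase 2 (placeholder retriever): rank stored memories by naive word
    overlap with the query; a real setup would use embeddings and a vector store."""
    def overlap(memory):
        return len(set(query.lower().split()) & set(memory.lower().split()))
    return sorted(memories, key=overlap, reverse=True)[:k]

def build_prompt(query, persona_prefix, memories):
    """Combine the fine-tuned persona with retrieved context before generation."""
    context = "\n".join(retrieve(query, memories))
    return f"{persona_prefix}Relevant memories:\n{context}\n\nUser: {query}\nTarget:"

if __name__ == "__main__":
    corpus = [
        "I spent most of 2019 restoring an old sailboat.",
        "Coffee before code, always.",
    ]
    prefix = fine_tune(corpus)
    print(build_prompt("What do you drink while you work?", prefix, corpus))
```

The intent, as described above, is that fine-tuning captures the target's voice while retrieval supplies person-specific context at generation time.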
Related projects
Alternatives and complementary repositories for raft
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated 10 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆169 · Updated last week
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆201 · Updated 5 months ago
- Just a bunch of benchmark logs for different LLMs ☆113 · Updated 3 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆73 · Updated last month
- A pipeline for LLM knowledge distillation ☆77 · Updated 3 months ago
- Simple examples using Argilla tools to build AI ☆38 · Updated this week
- Synthetic Data for LLM Fine-Tuning ☆93 · Updated 11 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆111 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆48 · Updated 3 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 6 months ago
- Repository for "PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers", NAACL24 ☆124 · Updated 4 months ago
- Testing speed and accuracy of RAG with and without a cross-encoder reranker ☆46 · Updated 9 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆162 · Updated 6 months ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆26 · Updated 8 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 7 months ago
- This repository implements the Chain-of-Verification paper by Meta AI ☆156 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆141 · Updated 9 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated 6 months ago
- Function Calling Benchmark & Testing ☆74 · Updated 4 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆273 · Updated last month