tcapelle / llm_recipes
A set of scripts and notebooks on LLM fine-tuning and dataset creation
☆93 · Updated last month
Related projects
Alternatives and complementary repositories for llm_recipes
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆118 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆81 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆195 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆236 · Updated 4 months ago
- LLM Workshop by Sourab Mangrulkar ☆346 · Updated 5 months ago
- ☆91 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆221 · Updated 2 weeks ago
- Experiments with inference on Llama ☆105 · Updated 5 months ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆66 · Updated this week
- Awesome synthetic (text) datasets ☆242 · Updated 3 weeks ago
- This repository contains the code for dataset curation and finetuning of instruct variant of the Bilingual OpenHathi model. The resultin… ☆23 · Updated 10 months ago
- Sample notebooks and prompts for LLM evaluation ☆114 · Updated this week
- Code for training & evaluating Contextual Document Embedding models ☆117 · Updated this week
- ☆93 · Updated last month
- RAGs: Simple implementations of Retrieval Augmented Generation (RAG) Systems ☆83 · Updated 7 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 7 months ago
- End-to-End LLM Guide ☆97 · Updated 4 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆97 · Updated 7 months ago
- A comprehensive deep dive into the world of tokens ☆214 · Updated 4 months ago
- ☆191 · Updated 9 months ago
- ☆87 · Updated 9 months ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆246 · Updated 2 weeks ago
- Let's build better datasets, together! ☆205 · Updated this week
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆252 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆356 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆229 · Updated 3 weeks ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 5 months ago
- Various installation guides for Large Language Models ☆53 · Updated last week
- LoRA and DoRA from Scratch Implementations ☆188 · Updated 8 months ago
- ☆40 · Updated 6 months ago