tcapelle / llm_recipes
A set of scripts and notebooks on LLM finetuning and dataset creation
☆109 · Updated 7 months ago
Alternatives and similar repositories for llm_recipes
Users interested in llm_recipes are comparing it to the libraries listed below
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆124 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆198 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆256 · Updated 10 months ago
- experiments with inference on llama ☆104 · Updated 11 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆46 · Updated 11 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆76 · Updated 6 months ago
- ☆87 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆255 · Updated last year
- This repository contains the code for dataset curation and finetuning of instruct variant of the Bilingual OpenHathi model. The resultin… ☆23 · Updated last year
- ☆92 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- RAGs: Simple implementations of Retrieval Augmented Generation (RAG) Systems ☆104 · Updated 3 months ago
- Fine-tune an LLM to perform batch inference and online serving. ☆110 · Updated this week
- Prune transformer layers ☆69 · Updated 11 months ago
- ☆120 · Updated last month
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆186 · Updated last year
- ☆117 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 9 months ago
- ☆204 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss. ☆121 · Updated last year
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries. ☆69 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆142 · Updated 2 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Resources relating to the DLAI event: https://www.youtube.com/watch?v=eTieetk2dSw ☆185 · Updated last year
- Let's build better datasets, together! ☆259 · Updated 4 months ago
- awesome synthetic (text) datasets ☆281 · Updated 6 months ago
- ☆94 · Updated last year