predibase / llm_distillation_playbook
Best practices for distilling large language models.
☆595 · Updated last year
Alternatives and similar repositories for llm_distillation_playbook
Users interested in llm_distillation_playbook are comparing it to the repositories listed below.
- Automatically evaluate your LLMs in Google Colab ☆677 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆398 · Updated last year
- ☆559 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆814 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,212 · Updated last week
- Official repository for ORPO ☆468 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,548 · Updated 10 months ago
- A reading list on LLM-based Synthetic Data Generation 🔥 ☆1,494 · Updated 6 months ago
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆1,022 · Updated 8 months ago
- Easily embed, cluster and semantically label text datasets ☆586 · Updated last year
- awesome synthetic (text) datasets ☆315 · Updated last month
- Generative Representational Instruction Tuning ☆680 · Updated 6 months ago
- ☆693 · Updated 7 months ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks ☆377 · Updated 5 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆129 · Updated 2 years ago
- Train Models Contrastively in PyTorch ☆769 · Updated 9 months ago
- ☆1,333 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆733 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆897 · Updated 2 months ago
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking ☆561 · Updated last week
- System 2 Reasoning Link Collection ☆863 · Updated 9 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models ☆829 · Updated 4 months ago
- Llama from scratch, or How to implement a paper without crying ☆581 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆294 · Updated 9 months ago
- ☆1,035 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,120 · Updated 7 months ago
- Automatic evals for LLMs ☆569 · Updated this week
- Fine-Tuning Embedding for RAG with Synthetic Data ☆521 · Updated 2 years ago
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,083 · Updated 10 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆500 · Updated last year