DaertML / context_distillation
Framework to achieve context distillation in LLMs
☆13 · Updated last year
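For orientation: context distillation generally refers to training a student model to reproduce, without a context prompt, the output distribution a teacher produces with that prompt in place. The sketch below is a minimal illustration under that reading, not this repository's API; it assumes Hugging Face-style causal LMs whose forward pass returns `.logits`, and `context_distillation_loss` is a hypothetical helper name.

```python
import torch
import torch.nn.functional as F

def context_distillation_loss(teacher, student, context_ids, input_ids):
    # Teacher sees [context; input]; keep only the logit positions that
    # align with the student's positions over the bare input.
    with torch.no_grad():
        full = torch.cat([context_ids, input_ids], dim=-1)
        teacher_logits = teacher(full).logits[:, context_ids.size(-1):, :]
    student_logits = student(input_ids).logits
    # KL(teacher || student): the student learns to match the contextual
    # teacher's next-token distribution without seeing the context.
    return F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
```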
Alternatives and similar repositories for context_distillation:
Users interested in context_distillation are comparing it to the repositories listed below.
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 3 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆36 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆23 · Updated 2 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 5 months ago
- A new way to generate large quantities of high quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆22 · Updated 6 months ago
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- ☆15 · Updated this week
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆64 · Updated last year
- ☆81 · Updated last month
- Minimum Description Length probing for neural network representations ☆19 · Updated 2 months ago
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding ☆27 · Updated last year
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆10 · Updated this week
- Plug-and-play implementation of "Certified Reasoning with Language Models" that elevates model reasoning by 40% ☆17 · Updated last year
- Repository for Skill Set Optimization ☆12 · Updated 8 months ago
- The repository contains code for Adaptive Data Optimization ☆20 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- This repository contains the ToolSelect dataset, which was used to fine-tune Llama-2 70B for tool selection. ☆20 · Updated last year
- PyTorch implementation of the models from "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" ☆30 · Updated this week
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆17 · Updated last week
- ☆27 · Updated last year
- Code implementation, evaluations, documentation, links, and resources for the Min P paper (see the min-p sampling sketch after this list) ☆29 · Updated 3 weeks ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆34 · Updated last year
- Mixture of Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations ☆12 · Updated last year
- The open-source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers" ☆20 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆41 · Updated 4 months ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches (see the batch-prompting sketch after this list). ☆73 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
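The Min P entry above describes a sampling filter. A minimal sketch of the idea (not that repository's code) follows: keep only tokens whose probability is at least `min_p` times the top token's probability, mask the rest, then sample from the renormalized distribution. Names here are illustrative.

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.1) -> torch.Tensor:
    # Keep tokens whose probability is >= min_p * p(top token);
    # everything else is masked to -inf before sampling.
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))

# Example: sample one token from a filtered 32k-entry vocabulary.
logits = torch.randn(32000)
next_token = torch.multinomial(torch.softmax(min_p_filter(logits), dim=-1), 1)
```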
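Similarly, the batch prompting entry refers to packing several questions into a single prompt and parsing numbered answers back out, so one model call serves a whole batch. A rough sketch of that prompt construction, with illustrative names and no specific model API assumed:

```python
def build_batch_prompt(questions: list[str]) -> str:
    # Number each question so the model can return answers in the same order.
    numbered = "\n".join(f"Q[{i + 1}]: {q}" for i, q in enumerate(questions))
    return (
        "Answer each question below. Reply with one line per question, "
        "formatted as 'A[i]: <answer>'.\n\n" + numbered
    )

def parse_batch_answers(response: str, n: int) -> list[str]:
    # Collect 'A[i]: ...' lines back into a list aligned with the questions.
    answers = [""] * n
    for line in response.splitlines():
        if line.startswith("A[") and "]:" in line:
            idx, _, text = line.partition("]:")
            i = int(idx[2:]) - 1
            if 0 <= i < n:
                answers[i] = text.strip()
    return answers
```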