agokrani / distillKitPlus
Easy-to-use, high-performance knowledge distillation for LLMs
☆94 · Updated 5 months ago
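As a rough illustration of what distillKitPlus and the knowledge-distillation toolkits listed below automate, the usual core objective is a temperature-scaled KL divergence between teacher and student logits. The sketch below is a generic PyTorch version; the function name `distillation_loss` and its defaults are assumptions for illustration, not distillKitPlus's actual API.

```python
# Generic sketch of the soft-label distillation objective most LLM KD toolkits
# build on. Illustrative only; names and defaults are assumptions, not the
# distillKitPlus API.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```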
Alternatives and similar repositories for distillKitPlus
Users interested in distillKitPlus are comparing it to the libraries listed below.
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆92 · Updated 5 months ago
- ☆51 · Updated last year
- A pipeline for LLM knowledge distillation ☆109 · Updated 6 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- entropix style sampling + GUI ☆27 · Updated 11 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated 11 months ago
- AnyModal is a Flexible Multimodal Language Model Framework for PyTorch ☆103 · Updated 10 months ago
- ☆102 · Updated last year
- ☆157 · Updated 6 months ago
- Fine-tunes a student LLM using teacher feedback for improved reasoning and answer quality. Implements GRPO with teacher-provided evaluati… ☆46 · Updated 5 months ago
- ☆136 · Updated last year
- ☆67 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- ☆119 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆58 · Updated last year
- ☆136 · Updated 2 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated this week
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆240 · Updated 11 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆116 · Updated 2 months ago
- High-level library for batched embedding generation, blazing-fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 11 months ago
- ☆55 · Updated 11 months ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities. ☆105 · Updated 2 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- This is the official repository for Inheritune. ☆115 · Updated 8 months ago
- ☆45 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 9 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 7 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆138 · Updated 2 years ago