anhvth / opensloth
☆241 · Updated 2 months ago
Alternatives and similar repositories for opensloth
Users interested in opensloth are comparing it to the libraries listed below.
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆245 · Updated last year
- A pipeline for LLM knowledge distillation ☆111 · Updated 8 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆97 · Updated 7 months ago
- ☆138 · Updated 4 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆257 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆103 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- Utils for Unsloth https://github.com/unslothai/unsloth ☆181 · Updated this week
- A compact LLM pretrained in 9 days using high-quality data ☆336 · Updated 8 months ago
- Advanced quantization toolkit for LLMs and VLMs. Support for WOQ, MXFP4, NVFP4, GGUF, Adaptive Schemes, and seamless integration with Transformers ☆764 · Updated this week
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆163 · Updated 4 months ago
- minimal GRPO implementation from scratch ☆100 · Updated 9 months ago
- An Open Source Toolkit For LLM Distillation ☆810 · Updated this week
- ☆159 · Updated 8 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- Tina: Tiny Reasoning Models via LoRA ☆310 · Updated 2 months ago
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆145 · Updated last week
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks ☆375 · Updated 5 months ago
- unsloth-5090-multiple ☆60 · Updated 7 months ago
- ☆120 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆217 · Updated last month
- An extension of the nanoGPT repository for training small MoE models ☆218 · Updated 9 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead ☆232 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆136 · Updated 4 months ago
- Code for training and evaluating Contextual Document Embedding models ☆201 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- ☆212 · Updated last month
- ☆51 · Updated last year