anhvth / opensloth
☆207 · Updated 2 weeks ago
Alternatives and similar repositories for opensloth
Users interested in opensloth are comparing it to the libraries listed below.
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆240 · Updated 11 months ago
- A compact LLM pretrained in 9 days on high-quality data ☆330 · Updated 6 months ago
- A pipeline for LLM knowledge distillation ☆109 · Updated 6 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆93 · Updated 5 months ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆92 · Updated 4 months ago
- Utils for Unsloth https://github.com/unslothai/unsloth ☆155 · Updated last week
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆365 · Updated 3 months ago
- minimal GRPO implementation from scratch ☆98 · Updated 7 months ago
- ☆157 · Updated 6 months ago
- Tina: Tiny Reasoning Models via LoRA ☆296 · Updated 3 weeks ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆249 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 2 months ago
- ☆119 · Updated last year
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆150 · Updated last year
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆125 · Updated 2 months ago
- ☆136 · Updated last month
- A repository aimed at pruning DeepSeek V3, R1, and R1-Zero to a usable size ☆74 · Updated last month
- A family of compressed models obtained via pruning and knowledge distillation ☆352 · Updated 11 months ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens ☆260 · Updated last year
- An Open Source Toolkit For LLM Distillation ☆740 · Updated 3 months ago
- ☆196 · Updated 3 months ago
- An extension of the nanoGPT repository for training small MoE models ☆197 · Updated 7 months ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆191 · Updated last week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss ☆138 · Updated 2 years ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆227 · Updated this week
- A simplified implementation for experimenting with RLVR on GSM8K; this repository provides a starting point for exploring reasoning ☆131 · Updated 8 months ago
- PyTorch building blocks for the OLMo ecosystem ☆305 · Updated last week