nyunAI / PruneGPT
☆51 · Updated last year
Alternatives and similar repositories for PruneGPT
Users that are interested in PruneGPT are comparing it to the libraries listed below
- Easy-to-use, high-performance knowledge distillation for LLMs ☆92 · Updated 3 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆83 · Updated 3 months ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆63 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated 9 months ago
- ☆54 · Updated 9 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- A pipeline for LLM knowledge distillation ☆108 · Updated 4 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- Low-rank adapter (LoRA) extraction for fine-tuned transformer models ☆175 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆145 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆58 · Updated 11 months ago
- Lightweight continuous-batching OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper ☆26 · Updated 5 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆135 · Updated last year
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Data preparation code for Amber 7B LLM ☆91 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities ☆107 · Updated 3 weeks ago
- A pipeline-parallel training script for LLMs ☆153 · Updated 3 months ago
- Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆98 · Updated 3 weeks ago
- ☆102 · Updated 11 months ago
- 🚀 Scale your RAG pipeline using Ragswift: a scalable centralized embeddings management platform ☆38 · Updated last year
- ☆134 · Updated this week
- ☆59 · Updated last month
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆41 · Updated 2 months ago
- ☆77 · Updated last year
- ☆45 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- ☆74 · Updated last year