nyunAI / PruneGPT
☆51 · Updated last year
Alternatives and similar repositories for PruneGPT
Users interested in PruneGPT are comparing it to the libraries listed below.
- Easy-to-use, high-performance knowledge distillation for LLMs · ☆97 · Updated 8 months ago
- entropix-style sampling + GUI · ☆27 · Updated last year
- A pipeline for LLM knowledge distillation · ☆112 · Updated 10 months ago
- GPT-4 Level Conversational QA Trained in a Few Hours · ☆65 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… · ☆157 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models · ☆110 · Updated 8 months ago
- Data preparation code for the Amber 7B LLM · ☆94 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM · ☆45 · Updated last year
- ☆56 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models · ☆180 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging · ☆37 · Updated 3 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list) · ☆143 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- 🚀 Scale your RAG pipeline using Ragswift: a scalable centralized embeddings management platform · ☆38 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- ☆120 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… · ☆146 · Updated 2 years ago
- Let's create synthetic textbooks together :) · ☆76 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite · ☆34 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention · ☆119 · Updated 2 years ago
- 1.58-bit LLaMa model · ☆82 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks · ☆31 · Updated last year
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper · ☆29 · Updated 10 months ago
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities · ☆107 · Updated 6 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens · ☆150 · Updated 3 weeks ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models · ☆121 · Updated 11 months ago
- ☆101 · Updated last year
- My fork of Allen AI's OLMo for educational purposes · ☆29 · Updated last year
- ☆41 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna · ☆59 · Updated 3 months ago
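For context on the spherical-merge entry above: SLERP merging interpolates two checkpoints' weight tensors along the great circle between them instead of averaging linearly, which is the usual motivation behind "minimal feature loss". Below is a minimal PyTorch sketch of the general technique, not that repo's actual code; the function names (`slerp`, `merge_state_dicts`) and the interpolation factor `t` are illustrative assumptions.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as one flat vector, interpolates along the great
    circle between the two vectors, and falls back to plain lerp when
    they are nearly colinear (where slerp is numerically unstable).
    """
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Cosine of the angle between the two normalized weight vectors.
    cos_omega = torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < 1e-4:  # nearly colinear: linear interpolation is fine
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * v0 \
               + (torch.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts parameter-by-parameter (assumes matching keys/shapes)."""
    return {k: slerp(t, sd_a[k], sd_b[k]) for k in sd_a}
```

At `t = 0.5` this gives an equal-weight merge; real merging tools typically add per-layer `t` schedules and special handling for embeddings, which this sketch omits.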