nyunAI / PruneGPT
☆51 · Updated last year
Alternatives and similar repositories for PruneGPT
Users interested in PruneGPT are comparing it to the libraries listed below.
- Easy-to-use, high-performance knowledge distillation for LLMs ☆97 · Updated 9 months ago
- entropix-style sampling + GUI ☆27 · Updated last year
- GPT-4-level conversational QA trained in a few hours ☆65 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆112 · Updated 8 months ago
- A pipeline for LLM knowledge distillation ☆112 · Updated 10 months ago
- ☆56 · Updated last year
- LLM-based agents with proactive interactions, long-term memory, external tool integration, and local deployment capabilities ☆108 · Updated 6 months ago
- Let's create synthetic textbooks together :) ☆76 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆159 · Updated 2 years ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated last month
- ☆68 · Updated last year
- 🚀 Scale your RAG pipeline using Ragswift: a scalable centralized embeddings management platform ☆38 · Updated 2 years ago
- ☆101 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆180 · Updated last year
- LLM-Training-API: including embeddings & rerankers, mergekit, LaserRMT ☆27 · Updated last year
- Simple examples using Argilla tools to build AI ☆57 · Updated last year
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆144 · Updated 2 years ago
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆50 · Updated last month
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- Lightweight continuous batching with OpenAI compatibility using Hugging Face Transformers, including T5 and Whisper ☆29 · Updated 10 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Data preparation code for the Amber 7B LLM ☆94 · Updated last year
- ☆24 · Updated last year
- ☆74 · Updated 2 years ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated 2 years ago
- 1.58-bit LLaMa model ☆82 · Updated last year
- Run ollama & GGUF models easily with a single command ☆52 · Updated last year
- AnyModal: a flexible multimodal language model framework for PyTorch ☆103 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆34 · Updated last year