mzbac / qlora-inference-multi-gpu
☆12 · Updated last year
Alternatives and similar repositories for qlora-inference-multi-gpu:
Users interested in qlora-inference-multi-gpu are comparing it to the libraries listed below.
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- Eh, simple and works. ☆27 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- entropix-style sampling + GUI ☆25 · Updated 5 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 2 months ago
- LLM reads a paper and produces a working prototype ☆52 · Updated last week
- kimi-chat test data ☆7 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last week
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- [WIP] Transformer to embed Danbooru label sets ☆13 · Updated last year
- Modified Beam Search with periodic restarts ☆12 · Updated 7 months ago
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 11 months ago
- ☆53 · Updated 10 months ago
- ☆34 · Updated 8 months ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Updated 5 months ago
- ☆17 · Updated last year
- Merge LLMs that are split into parts ☆26 · Updated last year
- ☆20 · Updated 10 months ago
- ☆21 · Updated this week
- ☆15 · Updated last year
- ☆26 · Updated last year
- ☆33 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆59 · Updated 7 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆21 · Updated last month
- ☆34 · Updated 3 weeks ago
- Our data munging code. ☆34 · Updated 6 months ago