mzbac / qlora-inference-multi-gpu
☆12 · Updated last year
Alternatives and similar repositories for qlora-inference-multi-gpu:
Users interested in qlora-inference-multi-gpu are comparing it to the libraries listed below.
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Eh, simple and works. ☆27 · Updated last year
- ☆53 · Updated 9 months ago
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- LLM reads a paper and produces a working prototype ☆51 · Updated 2 weeks ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ☆32 · Updated this week
- Finetune any model on HF in less than 30 seconds ☆58 · Updated 2 months ago
- ☆26 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆15 · Updated 4 months ago
- ☆51 · Updated 8 months ago
- Scripts to create your own MoE models using mlx ☆89 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆59 · Updated 7 months ago
- Universal text classifier for generative models ☆22 · Updated 8 months ago
- ASR + diarization model server with speculative decoding ☆59 · Updated 10 months ago
- Modified Beam Search with periodic restarts ☆12 · Updated 6 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated last month
- kimi-chat test data ☆7 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆26 · Updated last year
- ☆52 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated last month
- OpenPipe Reinforcement Learning Experiments ☆20 · Updated 2 weeks ago
- A repository to store helpful information and emerging insights regarding LLMs ☆20 · Updated last year
- ☆36 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Updated last year
- RWKV centralised docs for the community ☆21 · Updated this week
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆27 · Updated last week