mzbac / qlora-inference-multi-gpu
☆13 · Updated 2 years ago
Alternatives and similar repositories for qlora-inference-multi-gpu
Users interested in qlora-inference-multi-gpu are comparing it to the libraries listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆35 · Updated last year
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- ☆51 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated 2 years ago
- Modified Beam Search with periodic restart ☆12 · Updated last year
- GPT-4 Level Conversational QA Trained in a Few Hours ☆66 · Updated last year
- ☆39 · Updated 7 months ago
- Instruct-tune LLaMA on consumer hardware ☆72 · Updated 2 years ago
- Fine-tune any model on HF in less than 30 seconds ☆56 · Updated 2 months ago
- ☆34 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 10 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- ☆55 · Updated last year
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- ☆37 · Updated 2 years ago
- ☆65 · Updated 7 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Entropix-style sampling + GUI ☆27 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆46 · Updated last month
- Eh, simple and works. ☆27 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models ☆108 · Updated last year
- FuseAI Project ☆87 · Updated 10 months ago
- ☆35 · Updated 2 years ago
- ☆74 · Updated 2 years ago