3outeille / GPTQ-for-RWKV
☆13 · Updated 2 years ago
Alternatives and similar repositories for GPTQ-for-RWKV
Users interested in GPTQ-for-RWKV are comparing it to the libraries listed below.
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆107 · Updated last year
- ☆27 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆124 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆314 · Updated last year
- ☆53 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆158 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Low-rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Merge Transformers language models using gradient parameters. ☆208 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- tinygrad port of the RWKV large language model. ☆45 · Updated 6 months ago
- Train Llama LoRAs easily ☆31 · Updated 2 years ago
- ☆49 · Updated last year
- Train your own small BitNet model ☆75 · Updated 11 months ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this time ☆21 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNN and transformer. ☆10 · Updated last year
- 4-bit quantization of LLMs using GPTQ ☆49 · Updated 2 years ago