lachlansneff / sparsellama
☆40 · Updated last year
Related projects
Alternatives and complementary repositories for sparsellama
- GPT-2 small trained on phi-like data ☆65 · Updated 9 months ago
- tinygrad port of the RWKV large language model. ☆43 · Updated 5 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated last year
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention (see the sketch after this list) ☆118 · Updated 10 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 6 months ago
- QLoRA with enhanced multi-GPU support ☆36 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen during pre-training extends the model's context limit ☆63 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆102 · Updated last year
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 7 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated 10 months ago
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆41 · Updated last year
- Modified Stanford Alpaca trainer for training Replit's code model ☆40 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 7 months ago
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 2 months ago
- Tune MPTs ☆84 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆69 · Updated last year
- RWKV-7: Surpassing GPT ☆47 · Updated last week
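
The Self-Extend entry above describes extending the context window via grouped attention. As a rough sketch of that idea (not code from the listed repo), the merged position remapping from the Self-Extend paper can be written as follows; `group_size` and `neighbor_window` are illustrative values chosen here, not taken from any listed project:

```python
import numpy as np

def self_extend_rel_pos(seq_len: int, group_size: int = 4, neighbor_window: int = 512):
    """Merged relative-position map in the style of the Self-Extend paper.

    Exact relative positions are kept inside `neighbor_window`; beyond it,
    absolute positions are floor-divided by `group_size` and shifted so the
    two regions meet continuously at the window boundary.
    """
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    rel = i - j                      # standard relative positions
    # Grouped positions, shifted by (w_n - w_n // G) so they line up with the
    # neighbor region at distance == neighbor_window.
    grouped = (i // group_size - j // group_size
               + (neighbor_window - neighbor_window // group_size))
    # Exact positions for nearby tokens, grouped positions for distant ones.
    # (Causal masking of j > i is assumed to happen elsewhere.)
    return np.where(rel <= neighbor_window, rel, grouped)
```

Because distant tokens share coarse group positions, the remapped indices never exceed what the model saw during pre-training, which is what lets the context window grow without finetuning.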