lachlansneff / sparsellama
☆40 · Updated last year
Related projects:
- tinygrad port of the RWKV large language model (☆43, updated 3 months ago)
- Full finetuning of large language models without large memory requirements (☆94, updated 8 months ago)
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation (☆68, updated last year); a minimal pruning sketch follows this list
- A library for incremental loading of large PyTorch checkpoints (☆56, updated last year)
- GPT-2 small trained on phi-like data (☆65, updated 7 months ago)
- An implementation of Self-Extend, which expands the context window via grouped attention (☆117, updated 8 months ago)
- QuIP quantization (☆41, updated 6 months ago)
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia (☆40, updated last year)
- Modified Stanford-Alpaca trainer for training Replit's code model (☆40, updated last year)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (☆31, updated 3 months ago)
- Demonstration that finetuning a RoPE model on sequences longer than it was pre-trained on extends the model's context limit (☆62, updated last year)
- An OpenAI API compatible LLM inference server based on ExLlamaV2 (☆21, updated 7 months ago)
- Landmark Attention: Random-Access Infinite Context Length for Transformers, via QLoRA (☆123, updated last year)
- [WIP] Transformer to embed Danbooru labelsets (☆13, updated 5 months ago)
- A simple how-to for https://github.com/johnsmith0031/alpaca_lora_4bit (☆30, updated last year)
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs (☆38, updated 3 months ago)
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER)… (☆122, updated last year)
- Code repository for the c-BTM paper (☆105, updated 11 months ago)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes (☆81, updated last year)
- Low-rank adapter extraction for fine-tuned transformer models (☆154, updated 4 months ago)
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models (☆67, updated last year)
- Reimplementation of the task generation part from the Alpaca paper (☆118, updated last year)
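
Several of the repositories above center on one-shot weight sparsification of LLaMA-family models (the SparseGPT ports and the SparseGPT + GPTQ combination). As a rough illustration of that theme only, not the SparseGPT algorithm itself (which uses Hessian-based weight reconstruction rather than a raw threshold), the sketch below applies plain magnitude pruning to a PyTorch linear layer; the layer size and sparsity ratio are arbitrary placeholders.

```python
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights of a linear layer in place.

    Plain magnitude pruning, shown only to illustrate the one-shot
    sparsity theme of the repos above; SparseGPT itself performs a
    Hessian-based reconstruction step rather than simple thresholding.
    """
    w = linear.weight.data
    k = int(w.numel() * sparsity)                     # number of weights to drop
    if k == 0:
        return
    threshold = w.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    mask = (w.abs() > threshold).to(w.dtype)          # keep strictly larger weights
    w.mul_(mask)

# Hypothetical usage on a small layer standing in for a LLaMA projection matrix.
layer = nn.Linear(512, 512, bias=False)
magnitude_prune_(layer, sparsity=0.5)
print(f"achieved sparsity: {(layer.weight == 0).float().mean().item():.2%}")
```

The strict `>` comparison drops any weights tied at the threshold as well, so the achieved sparsity can slightly exceed the requested ratio.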