lachlansneff / sparsellama
☆40 · Updated 2 years ago
Alternatives and similar repositories for sparsellama
Users interested in sparsellama are comparing it to the libraries listed below.
- ☆35 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- ☆49 · Updated last year
- ☆73 · Updated last year
- tinygrad port of the RWKV large language model. ☆46 · Updated 3 months ago
- Extend the original llama.cpp repo to support the redpajama model. ☆118 · Updated 9 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆55 · Updated last year
- ☆22 · Updated last year
- ☆13 · Updated 2 years ago
- Command-line script for inferencing from models such as MPT-7B-Chat ☆101 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated 2 years ago
- Training Models Daily ☆17 · Updated last year
- ☆15 · Updated last year
- ☆31 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with Open models. ☆152 · Updated 2 years ago
- An OpenAI API compatible LLM inference server based on ExLlamaV2. ☆25 · Updated last year
- ☆66 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago