samchaineau / llm_slerp_generation
Repo hosting code and materials related to speeding up LLM inference using token merging.
☆35 · Updated 9 months ago
Alternatives and similar repositories for llm_slerp_generation:
Users interested in llm_slerp_generation are comparing it to the libraries listed below.
- ☆48 · Updated 3 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 10 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 8 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs. ☆80 · Updated last week
- ☆125 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 8 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 2 months ago
- PB-LLM: Partially Binarized Large Language Models ☆150 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆50 · Updated 10 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆42 · Updated 6 months ago
- ☆43 · Updated 3 months ago
- ☆37 · Updated 4 months ago
- ☆71 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆114 · Updated 2 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆38 · Updated last year
- Train, tune, and infer the Bamba model ☆83 · Updated 3 weeks ago
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆32 · Updated last week
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆147 · Updated 3 weeks ago
- Prune transformer layers ☆67 · Updated 8 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆101 · Updated 4 months ago
- ☆52 · Updated 8 months ago
- QuIP quantization ☆48 · Updated 10 months ago
- ☆59 · Updated last week
- ☆42 · Updated last year
- ☆74 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆33 · Updated 11 months ago