HazyResearch / lolcats
Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"
☆ 171 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for lolcats
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆ 172 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆ 104 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆ 154 · Updated 3 weeks ago
- PyTorch implementation of models from the Zamba2 series. ☆ 158 · Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆ 213 · Updated 2 months ago
- Token Omission Via Attention ☆ 119 · Updated 3 weeks ago
- Code for training and evaluating Contextual Document Embedding models ☆ 93 · Updated this week
- Multipack distributed sampler for fast padding-free training of LLMs ☆ 176 · Updated 3 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆ 194 · Updated 6 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆ 261 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆ 161 · Updated 6 months ago
- Experiments on speculative sampling with Llama models ☆ 117 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆ 200 · Updated 5 months ago
- A pipeline for LLM knowledge distillation ☆ 77 · Updated 3 months ago
- Code repository for the c-BTM paper ☆ 105 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆ 86 · Updated 3 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆ 111 · Updated 2 months ago
- Low-rank adapter extraction for fine-tuned transformer models ☆ 162 · Updated 6 months ago
- This is the official repository for Inheritune. ☆ 105 · Updated last month
- Layer-Condensed KV cache with 10x larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆ 137 · Updated this week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆ 237 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆ 129 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (Official Code) ☆ 133 · Updated last month
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs. ☆ 73 · Updated 3 weeks ago