epfml / landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers
☆415 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for landmark-attention
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆613 · Updated 5 months ago
- Official repository for LongChat and LongEval ☆512 · Updated 5 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆370 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆178 · Updated 3 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆649 · Updated 3 months ago
- batched loras ☆336 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆201 · Updated 3 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆675 · Updated 7 months ago
- Tune any FALCON in 4-bit ☆468 · Updated last year
- A bagel, with everything. ☆312 · Updated 7 months ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆435 · Updated 6 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆350 · Updated 8 months ago
- Official PyTorch implementation of QA-LoRA ☆117 · Updated 8 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆707 · Updated 5 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆528 · Updated 8 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆232 · Updated 5 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆558 · Updated 8 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆142 · Updated 9 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆594 · Updated 3 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆438 · Updated 9 months ago
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆373 · Updated 11 months ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆348 · Updated last year
- Experiments on speculative sampling with Llama models ☆118 · Updated last year