tomaarsen / attention_sinks

Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
★ 675 · Updated 7 months ago
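The constant-memory claim comes from the attention-sink idea: rather than keeping the full KV cache, keep the first few "sink" tokens plus a sliding window of the most recent tokens. The sketch below illustrates that retention policy; the function name and default sizes are illustrative, not the repository's actual API.

```python
def sink_cache_positions(seq_len: int, num_sinks: int = 4, window: int = 1020) -> list[int]:
    """Illustrative sketch: which token positions stay in the KV cache.

    Always retain the first `num_sinks` positions (the attention sinks)
    plus the `window` most recent positions, so the cache never grows
    beyond num_sinks + window entries regardless of sequence length.
    """
    if seq_len <= num_sinks + window:
        # Cache not yet full: keep everything.
        return list(range(seq_len))
    # Sinks at the front, then only the most recent `window` tokens.
    return list(range(num_sinks)) + list(range(seq_len - window, seq_len))
```

For example, at a sequence length of 100,000 tokens the cache still holds only 1,024 entries (4 sinks + a 1,020-token window), which is why generation length is decoupled from memory usage.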

Related projects

Alternatives and complementary repositories for attention_sinks