tomaarsen / attention_sinks

Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
667 stars · Updated 6 months ago

Related projects

Alternatives and complementary repositories for attention_sinks