Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax"
☆91 · Updated Sep 12, 2025
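For context, the paper's rectified softmax replaces the usual exponential normalization with softpick(x)_i = relu(e^{x_i} − 1) / (Σ_j |e^{x_j} − 1| + ε), so rows can sum to less than one and no token is forced to absorb leftover probability mass. Below is a minimal PyTorch sketch of that formula; the `eps` value and the max-subtraction stabilization are illustrative assumptions, not taken from the repository itself.

```python
import torch

def softpick(x: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Rectified softmax sketch: relu(e^x - 1) / (sum |e^x - 1| + eps).

    Scores at or below zero receive exactly zero weight, and a row's
    weights may sum to less than one, so no token has to serve as an
    attention sink to soak up surplus probability.
    """
    # Subtract the row max before exponentiating for numerical stability;
    # scaling numerator and denominator by e^{-max} leaves the ratio unchanged.
    x_max = x.amax(dim=dim, keepdim=True)
    e_shift = torch.exp(x - x_max)           # e^{x - max}
    one_shift = torch.exp(-x_max)            # the constant 1, rescaled by e^{-max}
    num = torch.relu(e_shift - one_shift)    # relu(e^x - 1), rescaled
    den = (e_shift - one_shift).abs().sum(dim=dim, keepdim=True)
    return num / (den + eps)

# Drop-in replacement for softmax over attention logits (shapes illustrative):
scores = torch.randn(2, 4, 16, 16)           # (batch, heads, queries, keys)
weights = softpick(scores, dim=-1)           # rows sum to <= 1, not forced to 1
```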
Alternatives and similar repositories for softpick-attention
Users interested in softpick-attention are comparing it to the repositories listed below.
- Fork of the Flame repo for training some new work in development (☆19 · Updated Apr 24, 2026)
- Code for the paper "Large Language Models Share Representations of Latent Grammatical Concepts Across Typologically Diverse Languages" (N…) (☆17 · Updated Apr 13, 2025)
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" (☆20 · Updated Nov 15, 2025)
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" (☆73 · Updated Jul 8, 2025)
- Code for the MicroAdam paper (☆21 · Updated Dec 14, 2024)
- ☆21 · Updated Jun 4, 2025
- Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models" (☆58 · Updated Sep 25, 2025)
- An Ultra-Long Output Reinforcement Learning Approach (☆23 · Updated Jul 31, 2025)
- The evaluation framework for training-free sparse attention in LLMs (☆122 · Updated Jan 27, 2026)
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity".