Towards understanding multiple attention sinks in LLMs (github.com)

thw20 4 hours ago

This project reveals an interesting phenomenon, where LLMs convert semantically non-informative tokens into attention sinks through middle-layer MLPs.

The converted sinks are termed secondary attention sinks, as they are weaker than the BOS attention sink.
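For intuition, here's a minimal sketch (not from the repo; the attention matrix, threshold, and `find_sinks` helper are all made up for illustration) of how you might flag sink tokens as key positions that soak up a disproportionate share of attention mass, with the BOS token dominating and a later token acting as a weaker secondary sink:

```python
import numpy as np

def find_sinks(attn, threshold=0.1):
    """attn: (num_queries, num_keys) row-stochastic attention matrix.
    Returns indices of key positions whose mean incoming attention
    (averaged over query rows) exceeds `threshold`."""
    incoming = attn.mean(axis=0)  # average attention each key receives
    return np.where(incoming > threshold)[0].tolist()

# Toy causal attention: token 0 (BOS) dominates, token 3 acts as a
# weaker "secondary" sink emerging at a later position.
attn = np.array([
    [1.0, 0.0,  0.0,  0.0,  0.0],
    [0.9, 0.1,  0.0,  0.0,  0.0],
    [0.8, 0.05, 0.15, 0.0,  0.0],
    [0.6, 0.05, 0.05, 0.3,  0.0],
    [0.5, 0.05, 0.05, 0.35, 0.05],
])

print(find_sinks(attn))  # -> [0, 3]: BOS plus a weaker secondary sink
```

In a real model you'd average over heads and layers, and the claim here is that the secondary sinks only appear after the middle-layer MLPs have rewritten those tokens' hidden states.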

This might be related to layer specialisation in LLMs!