Embedding the Self-Organisation of Deep Feature Maps in the Hamburger Framework can Yield Better and Interpretable Results
Proceedings of the International Joint Conference on Neural Networks
Neural networks that explicitly utilise the global correlation structure of their features have become vastly more popular since the Transformer architecture was introduced. We propose embedding unsupervised Self-Organising Maps within neural networks as a means to model this global correlation structure. By enforcing topological preservation, such a network can represent more complex correlation structures and produces interpretable visualisations as a byproduct. We validate the approach by comparing against the existing state-of-the-art attention substitute within its own 'Hamburger' framework and by illustrating the maps learnt by the module. Overall, this paper serves as a proof of concept for integrating Self-Organising Maps within a supervised network.
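The abstract does not detail the module itself, but the mechanism it builds on is the classic Kohonen Self-Organising Map: each input is matched to its best-matching unit on a grid of prototypes, and prototypes are pulled toward the input with strength decaying over grid distance, which is what enforces topological preservation. A minimal sketch of one such update step (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def som_update(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen SOM update: move prototypes toward x, weighted by
    grid-distance to the best-matching unit (BMU)."""
    # weights: (rows, cols, dim) grid of prototype vectors
    rows, cols, dim = weights.shape
    # find the BMU: prototype closest to the input in feature space
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # squared distances on the 2-D grid (not in feature space)
    gy, gx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    # Gaussian neighbourhood: nearby grid units move more than distant ones
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    return weights + lr * h[..., None] * (x - weights)
```

Because the neighbourhood function `h` couples grid-adjacent prototypes, inputs that are close in feature space end up mapped to nearby grid cells, giving the interpretable 2-D visualisations mentioned above.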