I think there is already some dispute about Geometric Deep Learning; there was a post about it here the other day. I am curious if anybody has any opinions about Topological Deep Learning (I'll abbreviate it TDL from now on), and what it promises.
From what I have understood, TDL promises a way to incorporate higher-order structural relationships into representations or architectures. I am aware that some of these methods are used in biology, since molecules also have topological properties (similar to the use cases of geometric deep learning, I guess).
But again, I am just curious whether these promises are realistic. My main questions are:
1) We can try to include higher-order relations, but can't GNNs already do that? We can just do higher-order message passing in GNNs – so how would a topological approach help?
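To make the comparison in question 1 concrete, here is a toy sketch of what "message passing over higher-order cells" usually means in the TDL literature: cells of different dimensions (nodes, edges, triangles) each carry features, and triangles send messages down to their boundary edges, alongside ordinary node updates. This is my own illustration, not code from any TDL library – the data layout and the plain-sum aggregation are placeholder assumptions.

```python
# Toy one-step message passing on a tiny simplicial complex.
# Features are floats; "aggregation" is a plain sum, just to show the wiring.
nodes = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
edges = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5, (2, 3): 0.5}
triangles = {(0, 1, 2): 1.0}  # a filled 2-cell, not just three pairwise edges

def step(nodes, edges, triangles):
    # Node update: sum features of incident edges (standard GNN-style).
    new_nodes = {
        v: x + sum(f for e, f in edges.items() if v in e)
        for v, x in nodes.items()
    }
    # Edge update: messages from co-boundary triangles (the "higher-order" part
    # – edge (2, 3) gets nothing because it lies in no filled triangle).
    new_edges = {
        e: f + sum(tf for t, tf in triangles.items() if set(e) <= set(t))
        for e, f in edges.items()
    }
    return new_nodes, new_edges

new_nodes, new_edges = step(nodes, edges, triangles)
```

The point is that the triangle is a first-class object with its own feature, rather than something a node-level GNN has to reconstruct from pairwise edges.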
2) Including higher-order relations by simply looking at every possible higher-order interaction is computationally infeasible, isn't it? Afaik, higher-order GNNs also have good expressive capacity, but are sometimes avoided precisely because of these limitations – would TDL offer a way to do this faster?
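A quick back-of-the-envelope on the blow-up behind question 2: enumerating all k-way interactions among n nodes gives C(n, k) candidate cells, which dwarfs the edge count of a typical sparse graph even for k = 3 (the numbers below are just standard binomial coefficients, not a claim about any particular method).

```python
from math import comb

n = 1000
pairs = comb(n, 2)    # all possible 2-way interactions: 499500
triples = comb(n, 3)  # all possible 3-way interactions: 166167000
# Already ~333x more triples than pairs at n = 1000, and the gap
# widens with n – which is why restricting attention to the cells
# actually present in a complex (rather than all subsets) matters.
print(pairs, triples, triples / pairs)
```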
3) Similar to Geometric Deep Learning, it sometimes looks like there is fancy maths but no "groundbreaking" achievement – or I might be ignorant about this, apologies if so. Are there problems where we would say "TDL is necessary", or where TDL methods will likely be SOTA in a few years?
I think the position paper I mentioned addresses these problems, but as it stands it is a position paper, so of course the authors are all for TDL. I want an outside perspective, if anyone has any knowledge or criticisms.