![[Image: 1qOIvRh.gif]](https://images.weserv.nl/?url=i.imgur.com%2F1qOIvRh.gif)
Intriguingly, both the Google DeepMind paper "Early Visual Concept Learning" (June 2016) and my own paper, "Thought curvature" (May 2016):
(1) Consider combining what machine learning calls translation-invariant and translation-variant paradigms (i.e., disentangling factors of variation);
(2) Do (1) particularly in the regime of reinforcement learning, the causal laws of physics, and manifolds.
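As a rough illustration of the disentangling idea in (1): the DeepMind paper encourages factorized (disentangled) latent representations by up-weighting the KL term of a variational autoencoder objective. The sketch below is my own minimal rendering of that kind of β-weighted loss (function names and values are illustrative, not taken from either paper):

```python
import math

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def beta_vae_loss(recon_error, mu, log_var, beta=4.0):
    # beta > 1 pressures the approximate posterior toward the factorized
    # standard-normal prior, which encourages each latent dimension to
    # capture an independent factor of variation (disentanglement).
    return recon_error + beta * kl_diag_gaussian(mu, log_var)

# A posterior matching the prior contributes zero KL penalty:
print(kl_diag_gaussian([0.0, 0.0], [0.0, 0.0]))   # 0.0
print(beta_vae_loss(1.0, [0.0], [0.0], beta=4.0)) # 1.0
```

With β = 1 this reduces to the ordinary VAE objective; raising β trades reconstruction fidelity for a more factorized latent code.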
FOOTNOTE:
Notably, going beyond the DeepMind paper, thought curvature describes the (machine-learning-related) algebra of supermanifolds, rather than mere manifolds.
QUESTION:
Given particular streams of evidence (e.g., the paper "Supersymmetric methods in the traveling variable"), is some degree of supermanifold structure a viable path toward Artificial General Intelligence?